[Binary tar archive — contents not recoverable as text.]

Archive members:
  var/home/core/zuul-output/                     (directory)
  var/home/core/zuul-output/logs/                (directory)
  var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed kubelet log)

The remainder of the file is the gzip-compressed payload of kubelet.log.gz and cannot be rendered as text without extracting and decompressing it, e.g.:

  tar -xf zuul-output.tar var/home/core/zuul-output/logs/kubelet.log.gz
  gunzip var/home/core/zuul-output/logs/kubelet.log.gz
nk61{a3Ȳ7YȀKb"rm@5h4lK@%S(ن$dXJXHQn4R&TvUkvMӲ:;-zuLOSUm#4K:siw֧ߕX?urnKMuIh>]y1C5 RF1CPF&)e!` ଗH\ܪS \ Q#F:T(qzR0<M.LS1&e"ܻwjN؍(({aֽT M >m~[KzDX g_ n8 fcW 66bȩhB+[gtjl`XC4hS&{l9k Ր,v!$ "1Ѡx&(D d,# hA%\ {l2dɘ5Ғ>$A0"H>` Ʉ@N3!c=MҚ5IۧA(JM>BChn-j 1N8ᴣ캣EP\w:V 0\T}s3-JD"L_*-q>yrb[,ζp/W,dQJ2Aq^'NG@ Jl̖dm&INdQ?.TUj9Tԝ\Ír n/_od_ gp>٘Ltw8-/"@zu*O8^#P~ט|ѻXAEh:FO8URhPE+Au-)ۑݼ{!ϪJZ v{^zȗi> t&4]U96HV#d]^s Z"O' F-q$ H0%,YH:n>G!R뮝 'ǻQ RR ʔe{TuôSоhyB92^ J%,$@x-B,YN#4L.%x>:]:-IAJS8˒6BL(93 YD-)5ZpV^[:g+~׷O챧=yTrQA#A=f=rZUp:ī 0鈃#zM3%Cp-GKC{ih]62p<}vZxbT6Xŭ u Ɍg^2hSSz$Gnw)h@aq:pfT»W]/ 6+9غV R뵙}f ?IKw/ɢq8G->|>GK,"^}Qc/KPZdov]N'⹬|KDlۖۓno HYŢ<9h"0\DG(t[͏d-㯣5iv5,u#͌&#II ўY,Rק|_wq ̇A C2_^ مxX>g:#`V~K~Tnww~N7_gM]-z0W6Uw*Ԫ,NZ\,#eMnܜ0 Y#idSkplZp$x6sא<EyGTPD Klwx_S=}%lW&#nd6ly<|fY7 푫# V ţcċT1,]W STF}J wod%m[c+2, |h)r\m{[\ YDߛ"" *&,`+dž0^}]:sYpMZFBzhb&*2qeTlLE=qt*{ԅJkʺ4:ik9r+3yZ9P> o%I }Ec 29^jI@ Z`iQT#)x yWm9rsv L4Utƹ_Y6^u,d2)^j-Sl#`*vF%M Gs`zfUl Oz#syjWrfA"6&̓ygu4Lr`"1Y )B Kf)}k30!+8 3k {JSkjڞBD㭻] {>=fTگ+}}BmW}$]o AnwnTfO..i8 cie"8k4}2ݖcRײmum%/Y     M k$FoHM0`6~#7h$Fo$Fo\#7~#7~#7~#7~CQHHHHߘhV(*3[gfՙ~NH!9~C!YXK9OwxlL,1 60s!25ʁSVrN0X7G7b"-f  3r=@J@ Uh'@&C$>!Z {id҂QߓK*9\w, i{l4x)v ݧp\68F9+OO w~?Ʃ_ns\gtԼtAҌǻαԚj1Q[Ԛ}6fxVp37dx{y5"Fz@ZreL%s##A&%^zKr{LTS.EWUt8%ZZ!:2T!p%mPsf,eJ k96Nջ>nb*~׷Oe jmqkB gB2㙗aT\JJfnٔ&| {F epVz=WU5g읋T*ͧ3udw= l5k='wp w|K:pfT»W=̻:ըwH5rcWJ vymf'6?XKi(]1n4|:n5?WdUZa:oov{aͣ}?e[*.݌Gk=kbp,UWE7W- eKR=hLB|?/u=Ԝ4%p;}rkqls((nmg.&vonm7ЇKN a<"uiԚ4 8k& qwјDshԖhiʲR9;' $)-r~1rJ-[PmBsUpr1QH Mj C+kb*$@Lv1<^)v4:;-OevʭtZ|\u GiS1` e[ULkf`,׎"ZcT%7EnjJ41Lh2vB26MdMnC.^e*N Π<'*)ǥ{2)>%_~)Kyb2 bk-YMG>,hP<"Bdf@|24u wmH_i䗽M|ɢ`1dgY,6,p 3ƖFyC\dlZ)oYZs[Oc -iYkȮE*&[8\@8TmsF+lg͋%3K>7yhf@&fh8>iL-W UT0{%2D\ @3Ә;xN?}gK֐{׺Ee ޠs|+MfG0c4wKl.tQH{>.G^ D%-hɒp n"kzI3X;9m1yIbaF=fMYQHR( ܢ`F~UK\Pd(wod-COrcTl 7aZ,;qWUx]mΟ>к㣇g Jڐk`{ТѤ6A I$ m`XC,'lyXߒMԇe r @.S? 
Z$A@BHYEȃJƹ>$e*cߐY4\F"AF& qF:!$>dl͆@*iUSҞ݃P$nuY5S2@&&9cO0hާ$E-G698Q [5bnyK!9Iw,#.;xV)2C!4[DpX9e2;m37Ec;Z0H S hOYHYtNNH#krNFSҟҤgo*|%~y@[!ʼrVmsмusFCjf`9$pOJMLxώGľq$G}K&N{28^:,9 ~6_`,yc"`Z]]/ϩߎ»we;E:$sćtޜNtOt|iN ONN/j Jx6zI1q1e|Œ*m`64{>^B czƏgمˆي ZT}>ӾoD`8lq¿4x+I۵%Z]ɺbb\[dyMB`'ҋe=(hr] v}F+u][(vqaHs)+>cg sS>p⥝{o>8/Sy^b/a ]2%H%@N4GHZW~]+u]S5ߧM<Lo7O$߽zW?/o~~w/Ż?Hh\&)R Sv*o_oWT޼hAVY\f&)WEbQn8 AlČzd}m@u!ńS"_\]&nw[qp%UϣѩVeB >jDȂQ()IKu:0.m'>_v1g F2ha[FY89q&KѹYNYx uSMqpRlm`_4z~ٮKt7$MѫBl.,ҭsVNŷ$"v=lĽsG&yK>[F0 dђ7ePr2l .*+cruup%h0<;){TPw4z,hE3@򷩓;-H߽Я>0~hJWv+rm{A7&k8>Wb~-kqVzi*b[-G{`<%.1bMC`Rq)Af-M&U"he1t5 %Q?!;D{".Xt_(tTLT%0 e@##cJЦ4kR*u)7X Y X[D Yxo00:7GHC4.1li]Bk6L[m~S8:y:;ֻ|"M멥QMG21мZi,6\. i89ԃiBj4lw)l! ;b~2?NɦyЈBCT*l9&Nf)HO-*"2B?bFAba .Ǡ8\ѐ+tj@$2qLef{6?{Ά#B{:~>%sr>*uƶ3`VХغ%R3R$i}"w2 rPX,25k/d*2I{`gtXt c^ʛ//lց?<{C==_ %9ty.;wykd98hRӿ3P3m GB\6Aܲz^t!G{Ԑ$$Q:qZh- ]zsYXB נ.Lvzht>O =94]uNډ)t-hZmA`I1Įn=k.攕 $cE ,-rC{+MXHOv̞ba`B&a!#U#]$20c ԕ6'AJ~oXSiL갇b4A5w^Yxr=X&UpYN»Um=&T%5+ F - s< 9199plY9 ; s%$9@[-Md*%d Uf^_B d`4y_ $);H&-vk`̒ndb%'&o̠H g3zkU\r%$F&1P< <0`e+RtYEnБjWAO&+R6hq5ΔY DGXU 8 OQBUdN)eLY Á6 5Kn^Wۏ/)+YJN^ j΄ߒUn'jOT=z=]̓gÌr[u@$cH2נ71IuŀTh$J92x Ev]{OlXN w]i7LC'-_=|*Mi!*z%م{w:EdNPV򦼍EƈD?-~x>f/_;=z}+㖹r7kI=>yL VY Cܦ=pcp[,Fk2LtVymзZ-_Ka%Đ:\HÜDۜpKwmm86/[yff7 ,Y ^m˒G-'-Z|iI[0@XMݬWb}Y(z SB`g<0L ͈Ťb2,"E/h_۾Gp7qF̓=u=ýYE鴡K4᩸>S͓'zX[}zxrn<^7;a[Osrv:ouv^SM(9{k*oZ\ ZD>Qo`^ 'NHL7/J-,.24=kAd 6Y$R*&{ӳ-colڸ63j u϶P[S[(mv#E(su͠AeGѰqoD#FqtBCY0+%X`@55Im9g#9lr'!(4!*5!$&D}%f&\mvLURYJAWm4<"Xa!/YdKt1W #օ$Fem1{W*:Vt춎M 娩8E7AQ5D%UI% BdYA=J5Ho|V6l*ͽkZHM|x6 _/J޸8t't=TkK4 !A*ᡚM/G}$@+"P%:gD`;S@/aT;h)]{OwPϯfXJ ✡%BQOFqڤQopɃ0i8=`*"*RHxb*'A%r9uTok)3׀lsJJM$b*khu:%* *7F+BYQ?nuI`\^?r*pzu -Vi5B_0 >yydbPa+ IÕޡгIbr* AoRM9cs>Վ:A'h*={QEB@oM9j.nW/Fȵf 빮P Sx;غsjz}8Nmnl:Ė\$go{ƛ;V=-z^i~>*x-wnƼ͜0-l;q[l•Q!]NM%~E.M-b8Rp/V|sYNWm4yG0O@q0ZU2HZ9jHԿSO]2ңwf:Fh& jMNOASQ{/yh}0uPЛ|ECΘ^ÿy_b7.h~:-ڭÆ`qݽkv~?*uwfŖ)zszC$.}b"]jR T%Τ Q 9K |Q 9F϶NNswY YpkD9`͠_:3R-*E/G/]4,Ysydz.ڡW{V/7텩cJoztPZRCYBaNwڜb K-v2 2BuO~jkLl"dۭ{ 
H`-wpmcEU*meg&l1fifqḍ[t/TZY}'XCӰv@yS(.;Sޞ'=F94]Ҳ m+4o2!*) $ _/Ygٛ];ڽJ7)4םי~_:I1Fx:E!T'ZZFb<<⽹6]\.Xn'V/CޞWJ__6Q^_HjCd^i4'4j8—BR_ KAGst$8— K!|)/—L— K!|)/—B^_J[!|)/—BR_ K!|)/—4x—BR_ qE!(/M)V_ K)X_ KQ̢(/—b K!|)/—BR_ */—BR_ K!|)/%~j ,C9o<1rc^!)qJJ%ЀH|DhloQY,y+/DhyxUqvO 3e]+;ּK\cIIO]SS~ib<0$7LE,Y҄2Q1o8 +!")&qvR+>BPƹIB?aP IQ6%¿LIEx;؛8]+tH@M? ui=#=Q</裴X"*B~ܞvmVVR|Xo3V,b< D?3ow0MN4|(x5kN~ǭCwR+IT UAPiGdW܉pq=*;GԹ|:gݎ8,O@$>xGBPWJ7V @UBYO (y *ÕAǏ@I 1jM8  Z dg(ۛ8]B\NFC>z_> $C#  /YU{kKˮftEpѱ#hOE' Nhu"@vr}H]N)iB\h&6F 5\&ଶ5!2*NР M=JolY>9$M߹$Dm'o41h 33QPҩK RR!%7h M  B m1c G@S;OuRs,Nq1U6M"Pe谤|^5ǾFc@%3Iϩ@k S ZDӚK$*B"^ttl}.o]oNhg m`P ' )|w׷^V_z$G*M Z^9l%)a (i /,^z/X^'F7ġgCpP z-hB *Pi,Nz9p6q!?5 QqL /N ]QxU%QH6P+D 6>,ۓti3VM^@wm 6'wBſٓ#@tzy.\LhвP-+(EH(uЛ|;MnXݺ4kuyǡ!+_%ǫ}E5*4 FABh$Wq)xOVD'Tٸoz9K|eKt+\|=SZᓆL rqŌHyiS4J"X ^ }܍AeЯ )t{gh/TſZhOY~zMJLJПN YRy1AOk;>׃KE~\Xwnj?U ԙo o8yg| cc>nm=7!F :jv~a= \oq Oϖg$'_~|8麓4w.núocxmV''TǓ>O1z{/u3݃cd]kXfuDž >L}IdqL`2xS~i~in9bydз]4RTbP%χ8/ Oh$O~ǟOO> e_>|?{EKQR>[_% nlk }ﷸ5װŭ s|_IHC_WkƖ1!}1Ҏtf*!M]1Ncjh/G94y+Nc4O%` g9@?%`b%8I/mXXQ?1p F JIf3g3Dt9q-2EiMRLq(:lT+\So|8q3A۸/'I 733'a22B?W8"zEy| qssXv681jUkiȂ/fn/yUnO>U76ď~V\N~B^#xyzڨJl/.it%'E10c4QC_/O7?˧qV} kۍ$G" tNୁ,փ],a>Wѥ/6>*I%mTM̃z"sH)ɇ\v:T`r^-qcM8vQztǜo''_O;w!T9'e!e}6eʉSu&{w["0$ _Yp H_yH=>D ٹi ٛ@yѯo8׮j/X{WI lNkPqu_wzi3~Y};Ρ.Ż;ȏϜ0{Oeg=_@;eGw?w>\*#쵓UkoOܿH><]k;muZ$ U'&N\XOj<ŢR[x1c幚p])8],zE4ó^Sld) KfmPsUW6[AET vpt/R|[Nv_2}u1n@L'sY3+yS=*&$}`d跢u mEw]r3‡}ϯtB\Er; fjY㿍͡vkWۗ@ϬvN8V:g{# jM7~x ^c1|- m~kivz6xm9閅G- #݌y.}`/=bXtyͯ+}+ͷsRoWc3Fκ?Dj|,)jV*J=HF&qA$qlg~slm{L^tf74Pna..`T 2 e5sqH;"IepJD-R{%Xf8w9Zb,]!}C(AH,L)mCl2J0KVdc&{]l&땐j-{'rlzZkK C.:KxJsHVI5PcFe:[_ )GEV-&KMHJ^ɡ9 "{t hLf<ƪf`̪I՚#KNDq06[^{ wx3gQ_nYf &2fiqaC%ĊM*; JX[mY+f}nq.Y)ZWܳI+Jmᜐz/D O0Qo+GprvR~YTIm{#-(c>BۄL)6 hl* Q9εD$E%pj \2%drV?Fs_D0yk+(MRw 8u .`#Hcf>2~2w1`Dlp%+y`uaEYE EU t"L4К5'abl+]GY ‚52)fB޲jPRΙm-=Xh,g修tfA^NwykBEe 0L!K-hL\691^yt7O^ӼM۪Ŗ˘k3 )3!k!njEHDv<J[ qa->|w)jY9J3)VV>0:US"1O#kH"]r5WjK})"aTqZi#bw`9 02Sa 5!Vļ)#"\`p/rOtԓbxm=l`3$+[0x?p4_6j!ZXy^.u 
_v)&d(,,MjSE`$1rŢISlI҈56X~n+ED8#2Z]+]j9GWgqن$ TD4[&2O l'nT rzo5Q@B<2DŽ`AB9V-e4'MȊz}^L$6ֽyCj* 氒3bsyO%AR %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P %P}cJ yOUE;Z ࿺a[JU=Z#Q= ҋP(P=K0ӽAHiuknt?vhIs<:!#Y&zgP:ZR>SKJ;ZR~{E,*qNi]rϬ"=`+2lk,NJ*G5(亶A:f-)-R[%5oH dڈf'\v{WʋTm}AS8?]]y_婲to\ g~H-\SZ%&niLJBg65x%zxu͜gV 㓣UW;FG[:.p-btm!L[YcB6AԨ11jL|zx՘x1z5&㼺04Ĩ11jLĨ11jLĨ11jLĨ11jLĨ11jLĨ11jLĨ11jLĨ11jLĨ11jLĨ11jL/Ƅ|JiVƄRƪScBWScBc_cVj5&ndwC 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@C 4@߯ פ&_^>wJ Xˇ;6?^f՟)sƦN'CYm[ǺX3,qh2;.Nw>bc ~_'@*3y݂hͱ}$:[@6}ȝΡjM[g麌1&Okܣgc[~>ۍ8?k8o<]˙m8<8^O=Olr.H>DβFWI',笈3)ʪ*S²mUq:cn=E;{j8i=zqbg+}9?)Ak+t^=zgr#w㣽PEopɳS_~ojDµٖ޾Ҩ|w4lÕYqd.b^2juY,C[_Üv^63X`W[kYHr?nIe$˔Ԋ N"U"ߧxJ"ɯKUV˘#XyTأz|[RENJVk9)V4h155JL\WX1EkySzO=K:]fJB;WݝrXv%yJG˭\-)T` lq-.[\` lq-.[\` lq-.[\` lq-.[\` lq-.[\` lq-.◩aLs~T0ЕyBo6y.ksqI(v9( . Үve䤠,Ge ȏl sv_0^ e[yYSvkME Q3`c[.p{w\CFXrlV BN(E8AXyU&xxB;4$B!BEQf&Ξk\D sLŹX"B*C h5G9\ޙRqql?Ҵ"I圓4 b ŕ8p$GڻMXPE*! ֙%e!1jўh%7$eL8 h7g/[ve7~}HF|trpagP[KBښ!hdJh\'4i>9̶uNw.:j[_詯t :`~u½qsbRTiRF/KIpԮ.>ْ&?/~ ?{KUBYIz^4ߴbMShZӜvB9_Ue&D f#A} um{?$b>/GϽץ=w$YbRNU4zӦ5(RE8Qz(1p=IttImXXVtqǡpF"äD'bM x+at1R(H,eP'ˡ4rM"ۜ8yIto~ܚw*gI$"I/-^ MuU߽&9UR[1EAZ>k- vZ fHgg^q"-uH~ΔCF1T)b+t@@dCJ5Y@SJx єGXe_}͑yc`*7y˔M=%Ǻ6ކ^:7ri+r=6 rm }۫oۨţIoW>^@]Y#t( S Phh6gU:?lvTc3evGUx= U݆:ᦺ7)n^uVM@ b I[@*(]AtՔ^= hevhB J.2v5>al{}3Ϝߋz+Qahkޒ$/Z(S=/sRݖij3֑[JGЙJb%+, &XH]vIn3ү :Z&ָ~2Xًӗ@&dv{#mo~, r=IJ3B0#>HԔ1тFQ4`pHF.|1} j.~(-$||k`(EYeEotRod =!"}X{))Jnhp$#9&-EAoIo0wX&TZvnּ-}mk.^w Y(s-.ڢW3+VQ+Vw\~0L}WNZJw %5<\/d6K:=/ x0(VKVnm XbK/-eFtzdR f͑E/G;eΜv.&duby#4^[\ ^9n{L.+2~yVʴ&<yt*^=wJ+u"wRC-8o_xEMˉ7Iw݆5q,)o96\&(CR"`\kJ=3fZ zh엘"D]67]pyK@xm꼭trZ_^T3KV8sjxa& F3$C}`!T`P}URxdgϲϲם˲;q^`pBTNjF1R),7Qc 銸CNyD^ +Tpc*h%{O:aFhNPNl6qFr<%e +Y#Xy=jn\8ݶ{jב XnpZv5pbz.BYtA(sI`.e.+%i:HWc2nPFpPqek:*ίg1ˍ4<0+mCo+gI1)A[XnViM{? 
Nx,1FP&J |BNpn4R#@L#"f,F/sfq, RB,[eA'OX4 1#׉v/rHvt4c ́߀5?ߜsU0`EmI WZrV9O0ËcP_._BomvuG /ݯe)-.ՙ\*5e?֗¤@]xv%IY BoaXDHQՌRe,:uuou7 Ŋ9+˭2 ĺx"<].PH@Ș8P=t:FY^1HJKX5XiA[ߛW-h3==J`:ՠtDL9XB giM+3q G&_%7iQh 5T(O+L-~:U:u:Y#j"qoAr$Nk2R䃓 Ӡ;»w} u)Aڥߖgr}$G^-[>7g8,7"*WcV)DS1jyؒz̛Xtf"fLgA.bVRQ;TNp|BpTx3KXfuChㅑJ vCP q`8-RLp.]6qvtn0MKx8p"a#$8詇М$.5-Ѳb6]PpA!x !hg,cl4T9#f2l NIpV]A:Rb&y$5X&ves} b6^jFe֕lI钦w)6τ FgK@I 6"%LXaiƱH&42CjaB9P I^P.,ˉMctW+0R,pc 9iyOۃ"DhTHCKK)FD_A72xin2N#KBkATL}b)+BI^m77IR=&os7ϙGwɄE}aLWoɎgT2{H'IW6;˜jg ;/n&i"`KTp-anŧbUnݲI%{R[*^XV#@DDwnaAe wxIx V'Ha.4J-}V㗔*,Kxu@+?tM.AIP^&P/]C)aW \)_ Jjv *sFv* f-A ä\||%Hz +YMi1e)kj+ G$bv^1Crr;KZ)O[Cmr,X.r\Z.K+JR.Z#t.\Z)Vʥri\Z)V"6" \$TVIUkZ%VIUk|TEUNzW"anJ"JenG)ǟz)ǩ)r|; 6DvI5 ce8TzLG eZ30DMFSѳ_7p6XCrʧMn*c`G~Ѡ W,nRaikWWo_ͪChYB˲ͻ]wz^4~&ToZLJxzi1Mp4Ҷ.K>б7r<dwjҬפf;)k;ҥ4(6{d2~W]u4e-ϤkK5I_̭R/ @KߚIPo|3fȆ[OҥmrckDaDN" EhܰyQWkrSٝKM8#m1M?Z] [,y"0bQG"`"RSFDD b FрG!eLDq][8/8 o]og8%vWc "N}oӗRz8Foo`xYL7I7; pgM ntS~Y#*x; Q27'/y}e%gʼ23af|>¼Yg g&L{&{_1MTz0uWN:JQGjb> ` 9=}DM]&D-sCrL=-vEA.֗װÔfa;9>]9u3mj2ܐgfp 5mhmA3h7mb554x ~.;JKu$K_O֑DEs6|3R4{vG=Nz}d`4{G̸Qċ.!)A0g=3J CgUHwl.K䄁.6fW[}d{e n*A%*y0NU#yH9W][Bt7: f#B1j_s`glrs\3_IQ+S{CQQ*GJz=^x>({}@;A:j<QT+,7B MZ )CNyEet4D{SIz[EĖP 41CJf/hY M*|H;$U+-g@zN6]gݷHK|9j!_`UNA,dyuJv'j;`5%Zf&%y0EI%3ȖXR?p2Em}R.[VA5AK8FV4.uim5ZF|p@Yt8[l0U>cB)7j$0IX voM:օ]ed;,(Iz 8M 3uD7KykYW۸u© C KͽThrG,e@^i (WKM)c(lUJAgR5gZ ?AR+RgL3pOtf&pPX$& lYh0-* MeV8e*"*#dAWDhL$xX$ÂtU!F9|1,VD890\%fJiHkq#'Jaoy8f>4v@46 .Ef9hU<%%m%KcP6>9P0.v ]ٔ{'_=ClK*Y0R`D$4|[D||>Q7C[Ȃf$ab5X )&>1S: !$X,ŋ^.^O$ |Ox 3[eɶh|Z,+&: *٫$RK YhdE1 ^ &Aڗs.2'mC]tMt5Zn<".O37 p@mcO ֆ-I V2zo ˮGWo]]aYŮǫ0GN yfJW4F舥CFev\縞yǾ`꓎mY5/\(\Q( +Ry WU0Jy #$.ƀ(<{&[ K^JitL*"Q* | *͸J&7g0W V0b:ǣMŘltZ?j;ɮQ$GN}?4>O\zq,8G{;N+挨O_:NWÓ~K!ȁSJk4\%5]TP.T|T`uW@$&yާ?׏8SL~wp eUG] B[V,pgykh*К9fWyrǸ=Rub!|J霏t*l'1$WѪ40#M%PpFox2ZsЊ"UEؑNA#ɝL@Ȭ$r"hq8ψŤF"äD+`31P)JD (x8x!KYv9LϽ3=co^ƵmXLz~QcUK4iZ:< IƒOś߂O|ځ9MO_xS|OJM޾|0ӏb~Vkw/cк/~f9`;p)AnmcE]]T D4JaZYu3^{*I&&TA: pAL }4]n[GE/wA$NJ74n1Q]ԝ4o~^ FFo5|gv=Mk{~Crkt"^V 9" R[;Nl'Mj=@? 
פKj:gz9_!6)}n]S"i*xȤDʀ+yė_dFF05>6 S.پwo{xߵgHoNX^;M|sX²Y˴&c6DL,IE)1AbX'-(I6]L":h!:oco~{H^{e>be/]z \ |w.4||  ,my۠\ny_Zcϯ%۲w[G7[.a:!!B+^iγo"qt@KT 9RKvXvN; ;{Ith$(#L,ZU L1˵"iUJ3i`Ry(01 [q y#>k\ 'b[OPx+zR@~G֋8GG1_b2dDX$Y;TLYE$gJ#%AH$ qThA6; ׍TsBpLgzr)~aYF;>y=QD5Ncp0u# `T",l3kQlM .7q~· rtHxjjE `%&%>rz(Lk`8.W k_-&n7*2,qU[%}=+YɞWU+Y{ w~kZ<|szÜVU6vԔwsI^wo ZX09{E c&IϭqkӜ7 )Ie7V)9ET TH`c)_A,!dġ;"V*XXlf싅0 srۦ册 jOpwP׉o~w~]~пLqĎ7 ȉ'|"g3YdJ ('I6ܙ:z@b@M&I@J&XDc^Ҟٍn8K婠vٱ/j¨m:Vs2jƒ1)51,{jGVC4=L Tփ20CCy4R%ǘ$bB{0+~(ǞqYDlC 3^P2Xc XxJ-aB=Buxu(9&$MEhqN@d^", ) keD1P4YEΗ6UL֟#z QۇK:,9i5_J1C4WH*F\uP]\^Uh0r0a*00.jG,ǗwN\Q gPdF8j00SuV]yD/ʏl.Qm٪و$. dS^Z*~u{ Z o7 r6*>zuW]x˝WMTqתha76(G9 JTdO-W2d؏ AZ8Γ Q,OmUW~X "5VP1~5Qo9x; O/0rlM+įuvp:./W߼h"օC*"|v7P '^#<8[WJJm͊Ttr4]9{SjvP? .KP9oSEi))i rk~kۻ?5am^RoU-XќupĞ;m ܔ&oz10~kK.+FVgZmelQH6U7))VT94A ) dY6cdBr9Mǣ~Z"m޹Ź&mZJ,:麥YWҼh䎍}nenc86oYR*Zlږ3_cS`)!XRsײmn.kZ7hCa~ 7j#Ӵ9ζ i$Y2.LmM{t)y0!U v9;4rY\!%\V|,wiN9% UX᳁,fW -'WYJJ;zpűZ\˻WY\q6pU*Koہ+A(Sʭ$WYZy*Ko$ư3+Il*HK8uR"\)8'cf|lڳ:\e)-W7K8Blr}d9[O]ݥ[x7SSZk?i]4@oapDG5T]jdZ/mnMQdkJ0;=,.?Ҟ[iCd؇v 5/@ 8Uֱ7 W087wb,Oʿ{ų.ӏ]xqhAЃ&.Or"O_߆Ѽj,ܮR^\.5o ?٩ׂ#' ,P 9(ta‰k)L]V.|fۤEbr*4` @SbhG 5OkyV*o!|`19:s|fQu \\҅[E 8ҡ͡ Wk' Sr,JWϥQ]z{НɎJeR߹YU%En_}RRݢ楒Cгu-|DԳ|GM2]j&mwS蹤O&I^ST%,ӛxb҇UL8crM 1A)FRX ZMi | ߹ڼӓGxmQ8խR-({uDy7yF >oh3%-tzy:zYN3hfbbX֝nЫWVN)^X)CN<ˏ SUbK.wnBZmK`\*Aѡ)xtŬ8[H5y ztrdrtfEh#2K ;rKg`4ܨ5)xdﵙՆjgr^ =I=M<ז~t*=wR: Qtk_l Mw7W`ͰIwp<-qR`o ]ew O^%"\XDX'Rr8V {%0S.zc<^%:8|k>N?L93ʽܘT^xvMc#0 cX;ͨꤝW,ds%W{ͤ]+8Gu׫jlEɸ}r]Rd #AIiZ˾9q/{sr^ÛoLWFopJhRRI'l2L|EcP%&Yj̞H$.':Lk&.XK˭4$KIl1qx#U|Jw~oz?<ޠX~jn\_8ݶԑ×-AZvTjIZUvs.^:_~vܫGBLS˔qr#k#ckXlk#~;-+OբRqΩ&O(6DRbY%Y,U.d-Ӛ `1$1wu`s F8B+LՊӂ)q6.ݮ, _PjboF5=KSqx/G7'ӮbBǎmcGxbR~y&Im2w$~^zvz.AM DPa6h9DT8ċ6>:$L;o۸)~^?n7ryy|:˼:9vǦ`0p8eR2Ϟ~ʹLu9B5:jO=zvpeںv(λ"EƳfxԼnI_)E''=#ERd=LLN8ݑ&Yq<݄ (RZX" q ruSb17N56B%W %L@έ!9x) L`hR307oo^^ Y^/L 6GO0mm(mw+Y\( q76ALj JXDTk9( ~:u:R?p0 U(MVU׊2QQo#޵u+r^lǐ4)n[M ,60%W8;?~( !|B_{Te^p;As SȡFt1^@}ԇC_J_>W!x#22\DN+|ZRGBpn:U6:,0#M{+%Y)!2 gZT.;V:#gJgӲp~s"uP2H x]ܾ]뛸P^"Md000h%fH61r>;!Ks"d` 
>HB]J\%5Z&&&=&f#-бtF-`|lg.)%.spJ(LX m"7lbbFoR6 Bi f>p.pn)bK̓)Mv.[*rUh\5QOG_Iƀk)oQ "p@+cqdJ 8"R!fmMR( z)dYk]봭 ԻhvB/MW&8SV 525$'k/5gOɪ>Q&τfe,f[RH3C$cH2kuN盃0qR4o,xie|oz}RߋO{_s*>|V&1UIdi$s&3D.r`H7>*M4ڐAۑεm]3McI~`9{45mo<} ܾ9qֈD\|^]:p2E#\j+M)[;]-x6[ro-@IbrN ܑs/"^{%Stv)Nm*{d9: Y>q1qcPAKGBs]?uEf Vx2M彝7yC56EcY`WXI{lHj[ 1i-vF kQsY7TA۠ܔa0 U>'Z-<@S,vLWmxXsV#E(]aRɢU#)E\$.2њKu9I*G+vDyɴK<Ĥ]9vY@, 1 Q E2(# Y&6$Qϊbr"!D$M0 ]"&]ZBӤ?wM}VOvfh69oȮmkׯNω'gM;?ކ/5< ^w]9)*ov9Z 24 cfY :vglƨ_`+#"GuiZaNκ#SAǥ2daH!`lxf@9mAuj$ke$Br}@H:in$Ih KZp%ջ":.&_gg\rW\"qŭNo]>P,k\tN_ډk6۰Q^}xqF?L\b"(_;ʊ}1HjQaPGjCЌsEvF߃F|[ w=fiGȁS,;HN%`ʦ!P- (eNNM`3.O 8nT5r9/-_1HC9/499!*'+Y99K)CTvS/jDA1J`AN8т MɟH9*@DoP}< R'(R[tK8۰?g Jhfb CU24&:iڠNX Z|BIϹqgaly~`!#v"i)0R;P;"^r1[A&.1rZFʌIBfoD)dT7)Rh>+.*‡(:=u!!ͪn,MN/ZOÿ_5_R!^WY4pPjfzWQ?<5Zu ܓRV^0^ }uŐ,i)i{()㒉Swd\=7?<ߛO{QN*THc[YM"$7Ӹ aJY!s~9?Ich>6^:u 4O4r{ NNW/j JcMn5]$bb&&3Z'K4B*}`W4j/.Fqyxu6;_x]C "%1'`vl'Q5~>5|_-wF/u-i%P׷t+6,Hhc|,XVhxEV\꺾Q좯ӲÐ2RV>OB4U\{[~T^|Qa Dg|_|(~?i;?/AY bT4FIc?nCտJUMG]1 $߼W޼;_wo_:7Y :7 ݋zh?߾iMK[4mev5c]vyM.vIl 3]zـʤ|oN[\/n.k_ͷ3Eb$yJHGSi*(| AqGk\%>e61Zɀ6jQ!)C:.|"sI\e,CNǩZt6 T= jxFՌ+ GZbj6 rՅ~ԷshqF)_Wί _5ݘmOp8y8%(km2^1bkZOi#2-Nu uƓXf UMͿ/_呗/~+oYХxr~/}pL|0;RPG\9ݐf$ eIBT lY#.;_op8h͕Ш%to-Yt: ?zY'[l8ݻX}ihav:~i/5olί͑EZܭ;l2{@ Tjsv~U&ث٫:MZe=_}-rսaGd8$KdX3d&P5 `YPC%}գG3?a#cUYĨ&Git2`2((`x)26TT ) ҺD)o6PA=%b{Cцuټ@]_Bmr!eGlns.Kcjl#&n /"~XHc!lXąݱDĶ[ąJ.{Zg**pUgWhw`d3pU+pUpUԼgWJ nv `wg{zW +[=\= lA jgBzPgWzZ\fajFя?-7N| T1ϑ0AxTzqG-O3ܾ3oiwTSU@=!x]+ j4KkXrD- +Zoo=K\ʝ+, w&hP+>hP 1WVZf.L&) S^ןUk`q'?{6  vC. 
nf0bF?me#IA iq ;BP4 A.C3Yi~:6PUA]lp)Ԅs2r|u{_w=Wg㝇t_riZlHz +Mn1yT6]h}h6AE"R&!r`4u`)"{kHPkzCvB;CYX׳u):* ᘰp8qrOJD>yQRίk9qv>IϋrlSql#'4<0soU 6m8S@:Vi7>P|~ԁ|\š#G]l-2{m鲛Oո9vljX)`֨Fx>v`^OwxwH=̧Jw]Į;dr-26l l !=tzmwTx?wJ|a}Y׀LLryå T3cĜEEPp4g@WmyOZf?d@[_>_x*HyAUP (j-zsqws+Zpһ踢 saVȔpJ-d}%Ö' DŃbDFRB; i-XYl9&0<6VOe&#QHYu2LwZ DQbhnD|*9[S:[ƣp[@G!!X:9.qذZoPiy}c1w:հfT]›W=u9tHwbnCy=h~oZ]uV@u1af?(Ct^^]= XS~ϮͲB(Cɼ^Wjx<+xJ^pE}Y7Y!mv}tMބ zŕӥ9W/vI3+sHbwmMIs:Hd2~UG"jžCx,^F%Qd^d mc:-S&;7 ߩaQ-{/>HN >}ո~^_䭑5.dgG;ȢgmV5#\ H rH# be}0z)#"b1h#2&"b?&{z)8jDTl{>Y>mn 7ݏ'V[o{]L[.E]_QexM3̜,㛓,Yydc2E4^JJ8%\b1(aH|6A2|slNn68^kAC%{N >5g utw3;xD8.XS)f'ōaIo{tQ/Rۆ=7D۶!OׅW/jNʝ{b$vn|OKϋrra}#Z왦S*3 1ĽƷotˮ{d܏zj[gLRl;m\r7pVeFQ-ڑ9m}金vfhtk[Ud׹3|3(NǼ-NZ~|ޠ7~m#f(˃IE(o96\&(CR"`\kJ=3J CgUH~ǽ #c͇boYT%hYϜl$ C l6:X'B zzE'2:"r/9^1=^sL~we-.aHn297kub(pD_['eO^P};SrR4jeFHaI7!"ep)hR +TBO8`1waʽ@:aFhNPƖAlklTSJݡC| Ϣbjݧ#*[{hHKzS`׫A"R̓ 2B[q"7}0YoZ^rÜ9Q [cI"!t]wu:[`n1"&2h'Җƥ8-FHNr(ӠSVJqTQܦ~tdXDcZlZ ض5r㳧R蔀5vie3=[htM>G>\y^4<ϫL/;_L%p.s1˕E"Jƨaݣoaq-̺HO{9k[A:ŬH$42$w#|BpTx3KX CʤhㅑJ vCP q`8-RLpEpː5rC:(Vz?jca;,ڸP"QsSP]=:yYMHf^M]N VhX A;cySe91)gc<%í Q0!S&a`T1kQAgkY˼9&xANSw)PgL3%wO tf&PX$N ϭPxODhB08. :q`UD&zUGȂ*јI&reӢrk*F^<\1,VD89\%fJi587X`ǰnP@LS0ϒwiƅ^f՟?t| .Zz _L*.M*ʤteEj2ٍxېTj0o "RUpfz}_Y(b'd}&dEE >)"Z J1u(?cw硯_%P0'%x`?P0ZXZTa9 >az4 q"cq:? }f`>VݛR%i{=B0e\t-عۨ49Y(;gbd67} F&?f }KgUdb_KSo՛7y`sg߻ƖpC뷰 nl-55CѴևO0i.>']ƫ:;GV'ljJr4ka#aoÑKU?!, |auR:=7De~~~fo'3LT9 B2 h8(s XT3׻& ɫ޾{C?՛>\a۫>qpU7 $s"+7rkS:]VHgjxP: m0$w8.\ڎT<W8 R6n֯QrΪ/7{Ia4(*{ do1\r,E<{FM/8`5fMIMdH7:2N{z;B`Jү85iZ(ڜ-rm|̓Oꔥ%6k MFKm ۋK;I߯$$YlJa^iGjd^ wE%KUŗ5] *rR4}:߄lCߗ[ Y:R7a\APKY5 r)ۧzIWs`yР%ڗAW/Ai{wM^ݭG?=>5[V&en]lyvC,Э#-?({vɾ/F ZTRF9b U`<񄦘u;'qwOG_jH34\C\vF?wD?MM bQ[+RYe# z-r]&c>f!Yb:8OAU(7i FXp9mHLЙ J61>3W~VkΖqYcw6|'_4sMvTlŇzV}1ud^憲η1_z@JKgE(:Z"*gT :,JwLq#Ws9O"[t*z]C9e3zn*jɨrdvZly^|uWx+|ޣ\W4)8]KzϞMc9orqmCMe=m2[UQyiX 2Dm2GuV0N4 J+r%tmMNd~Yi S\` ds9rIs-= |ߞ՚Eֵ,Ex*Xh ZT]0دm]bΟh42οsF!rf.PDiIdH6b>2JI0@MM4$Cb2 . 
pN爙Eckjfq\ jWOEmW]C`l\Bhc^93˝,Z4E13b5ܒ3IVHǁ"⢾ Wx% /1q!1\*C\>(=>0gDrm]ʖw"kT@7p!LH!Z@Iɓ`7SlUY0}Z6J2. ׍x͹rq%% zԉ:咁W0M x6 UNpq_x*x(;+]#ʸهH 7Gŵp 5լTjVSjYM5f5etfS&M5f5լTjVSjtL3TlYM5f5լTjVSjYM5f5%;+YYkYZkž,rA0 %3Z&ֿ_k"M䯉5&D_k__k"M䯉5&D_C`_k" "6&D.6\l"M䯉5&D^o6Z_ ύ sm.d.;4<$WGA2FD ߻3wctV8N=~񁯶?Vgc[+℟#BB`aҐUfV- /YX3v"{!TW=B="'$8[o uB>$xNZsļ{t?C:ƙҒahjET.W\\kWߢUX6y49>\Zn/hr蜵5PG2z!p*c2ge"GJΎo9$tc>bV"DOch(#+ D:PYù!@OZbU{Ѹc?hzd4/c5/m $JMkL4j+^xpR9BbYic4Fql[MM1F#,l" D";P{H"GDQqHȴQ0W͐SPC抂LKB`R2)#pO`#PRۇUSY(cuv#ϛ_|?WME|T/Pk!ug~x%*)gm ?N&?`UJo. { ;p1*dqpǑ,k_r%G,LQ}K&Rrlaty0;'<=זZГstd~1 &cZUs%02p|^ϹxIү~A/ѿ-Q͛ӳÛuEz nUnޣI821q&'%vfztMʳ 0iQz0/^~'$f4ZPW9_<V뾽H4Oƿ]h]u?c;zNn#Vɕ9X>&Oz|,Xfxt=uV?`|Uw](vÀK)7V4|7W3VWwkW+>T&W Zv|ۿXf;4wZ~p')_x9yUyEYZ!iEfL 'fT4 QkNҷ^mXKכqa3ōdfCr%qCabQA>kY,SeSggUmL<1VZsv^r2CxU_z%U@aSJlkd7HFB$K#vE[TB B7uןBl~$(} HU L/WtT( Q* bcJE1pI-ʼnΪ#(OR+u!w (*D0hrX>{Gz <ˣ(UL^-ĩ(Kl:g d1tSYu2q*ljt;9\vtElE3}azn{!grf8=+p8b.,:w2M)jkY?[}Uh0 ܧ]tGDDmAq2SR ps{Y]Vb8V. U/fqȵMM'췌3_K쀚Ac.K:*QKn^z}ԝنJl89 {JvTrߵ}^ׅ7WܶE7J%qW;Hϼ͘Jy.*P"u7zM\>WKeei(~jyW_Dܚo29rڷo> ˑC8Voo,Z|GY0sGZYbNS D,qRR1KOVDW\.nƟW\?\GkE<|2&hɂHgiTH12*r^+#0nd'ߌ!,"/{zŇ8JP>hKƪ]ҙG|q'T~.]_zOϟT/E~{?p+ox:^>qy>!3\aa㯿Z*YN$d.clF[ Y3ʜBY{H<3>mkHVԻOzhI:æݼl+]=L=$8EE :ʕ*uuB=<.(]E(m̢LR{2Ij)h_st%t-ZePb!C^QCSu)kdb #%sb\*ڳF9x992&+<>ҼŋW?_O=R]P.tP5sZ/+`:]a!2d((odOפ58O@B9(vSm/N"͇{'t|9xp슩+;s!CY]B2:U =1mW/h5f}͉ߴ,[Ebn|V d aԙ I.&,[Nݨy{7jafUZ &`WKl1R1*[du R 1N.kYٓTy7r -9'NS .m+GMOuYFe(l9$* l-62sQ~A<|eʅn)B7!rŶBO\͐XYk_IForIew Xɔ+fc"@Ej0ɗvs1Nx*Q0F@Ʒ,Zфse xxl\ٮw ^^4lxل9p1h0Z{۞6Sx,o~]v:7=,RtھJ}UKuЂoK-u)B\}x5y 'ALJ\oK V9} 2J jH&piߑjnĕk~/QU2A\ʋ`{?Jߋj¹+쀮~Hqe,v2ɣ>*[؝8`㻫>:P9b:*w:D,d`^zN1,8~ip94b9||-|w{y39r%嵜b""uMG2R%ֿ.ǧ2zI*TFWޗGh*A$x.LҿFwK9zӧ|}ِ__+e/k&4,{sCx懫]?:<.3 oJӜ xK+D*z]h"ɇ ls(t/c`ʎMɟ?ϫ 1.{{ۜ5@}x`PRD,a,/R7]PZ `]P;`E]D-$| >ƅ~QeSz2)9w.('X .6Q\kCKyoėWw47!ۼ'sک7?,LlZ k[E{=+i~̕1.j:UUy7D!ms,TIǜ1)ID9:Y!ڰ**hS^Á|iVTHAS)9)sL4@Ѧ,5E0997rtsʗr{7tw}ٝm^ojr1J2;=gc6kb1bۻltyȭMZyw]s"d׆_Soy뼙~7o|n{ߌy ##ܾƷ[[nWϚ is;α\<~r8pe[4rˮFl.Y)\Wyrj 4UmO-uֻCSW>kLuZ&5h?NSwo9I9LwsC ݋FWѩSsFvhJ, 
0"5ߧ4{``RFa$P!;' 9l!+*Xl荜=me=}Y!j<~ >:?_^R_M&/kàgCL ړ  $X[(pJbe D8Dhky0r"81 `Pb%sS!Ylo7s)< /D=z1<ϟ0 ~lmHiCz6oY.8N6w+yR"e ɇJS1PLbxq*:'j |&8H~$x)c$&y*PANpLD(G*@.תs͓9A"y^|gFp\XrFaRmLJy%ʹUAHE6d-I.:Ӏ7Lwy6L?cuhg}v@QߚeMg"{K}N! \L,|i[E"[FĶY9R1h1ޗvHzN+rD|p~xs7ݼw+oo.e蟦3(E W}vx'3]. V{,:u ˩x'OEy8/+vW˞! FDkŗ(7'oȷ'!cDeWQB9ɥdb rE `,̪4^pS)s͞{=P7__cHXp:s,(SqDf)WK&NpA# T#/%z9X&^%g7Y˜ ]z\mu.c Bnr7ro-ߨ\ 9arP3l*d:r1J8vwqA5 stCr ai]P{B3B B3JEg.D(ShLaRZ w c%E-3ePb!C^QCSu)kdb #%sb\*ڳF9x992bz},Wns]mnedLtzc;gft k` /_0$`_=qqG$#Q Pإ"x9C6}Kz+K9d˶tDm𱪔,)VvC1zJ@V5B:uMK6g9_O꒿g78οh\4Gޏyyr~/?bK8>h#yU`%#Y 8&YT]yoG*`G`<8$,XSBRzx$)#F$ru]]KcBAsx[Cʊ,I0h$u ކ& 'e4Aqc ZȹcU".qzᜂ$%>s (z262l%'t0# YgF a4C@=eG3ŠM"#&0^thuG'r7>8vʍ|hx & %W. SFΔDTZ<:DpHȐ1(b$ /ˤb\a ́Octb=XC^/qjih($5)Xbuޠ]FedoH6&^܄ܦazA  /Ѧ& 0-h*׎GlDFv;ˋQf/$`=(1DZ'0DA#N6 nHB BRTb_b\%#tI]ӴȾuU4'҈ [1O|<%flI?ԧPdwk' }<'Pv?`m󓹜G]*bR kn3'`G%Sg2US7i F`Ԭ ){ɐ+4|jAdoe61kYl;Fh|-*ʽd֠FjFj69^sv|}ͨ"K{) q9f(Bٖ*rADT1ʼnݛ`w:;εs D٩w2 scPQKOL&EzUh8Źg_B{\S4 ,HTœJ)@Ai8xq)nz*T iQ۫@ތ{3MލVf-&o^\:*L]vj;\\{vctQ&J=Z]δqR!8@3KUHT&}eH^p$̂w'4$o/s؄A:ObW '4e!%!!I!(urg3`6][Cf`N)ŬJxO $K31ƶEzGMVReoĝnk7l,tk乗LF2()zNALsrD]K9;IM|j/usm**zZ !j23<: *ZE\8]w6=a~m &b+J.zr,eLBD9YD&'͵+H 62F2ĬR [bm m J&g\$dc]||>3O?K3)CI 2E.*ɉ%H^#:Hnh%ACEI0@MM4$CbЄY\1fdض/"g=b ]R.w[k]ڝRgJ" Agy`xhXd vfcY1"#[b{R[6>q $CEGI")E6j  Ubl|q~m͏ X8{^".d) /!1\*C\KZKH`Έںӽ"1]:+FP.gB BYs#C#iC!ɲ5r8ӫz\dlmle\.v2\9 d%% zԉMl,Uqt1qC.>. 
6;6C2 lgǓ|Ҭk M4v}T\D1%p8YG Y0eku ϫ,T&)JZ E)ɭrYgoA3C+R}xf|fW-R*OT9v^(̃謭zJI 1ZXn?߽r=慗16h$j )*eI";P{H#/gBqk`,:Fj613#+Yd1pIH FyNaY&%|lsX(uY1?FSwyRga-+9zH`q`Ib|]eaI^{P9~}u\ͬqecRWOS+d𡺦Hݸԅ}nI(f5z㒉Sh)~<'?羫gfٓTErwtP5o' r9[/5ǣ2q?,!t<Зx)W浽=AÃCfmIGt59Ib1q&F3Z'KB7+}]]M//Rژ~yלx}69xsY0kA-9_{>OVd$g~+`lYK-m -,kF,oɥ\,iS |,XV>EdU y'(vq9MGSʭmD^\QX)?T?}}=Ho{iUw'+PăT-pt2MWqݤJꪦi{*NIB y/߿>׏;z#.ћ~skZBec)/&ᷭ'F?M[hijo޴pl׳_߮f./i۵VE$5bŇ/pd|иϐ<MG/~R?*Mo7LV/np+1yU֊"V(aEHGQj IǪhe]$=a.u>v`Q#cp68FYx2)q3ˉEeQ S8ONw%Ҭ9a٬}kf|ÅmΟEwU]UWKjq `uvI{=8}ƅ9긅(dt-5x@S3AGu){);FW4U{~wZ:ar($Se#s J2)*KevIsV[C: R v,ʎcQ cggah-~m+o_;gN 0.z%[ v먼Wġ:h 肹[ ܷΟ.B0Y!)SrZHZ|Q'c^q#0 b ]YdF87NG 3L9[4YJ$Hms|r3ߋ;"_?DI/IaEMbt6%mrl I'266^Md8{g.1z΁M1GȄˤ3&"}D23zk-u/ULHд!r>w\ kK5=:G\I|!/}5BGKuxFgJ"*a-O[YU 8$dk(O: xEfikԙ(P<MPF3J LͶ04Ӡܰ}75~#Pd8:ߊ`̓moRl"c?݊ZR;OK ECf`i 97|6KrkԒa4 @YeŐ}t6]z՘i`t :*ZlE__5ALᇳ36  _l5DLF|/?V2gӭUtet`Xj t4+[Jf(xW¨ ܸzȧ$LYˍP%`EL )B:Jm<oE\~xVtま]ʇ\4e^KyE3>9M3 "Ee[>eNE P'Nd*Y b_o{2$"P,RD EE)Ę[Tɓ:g}zx2ߤe}zۿqUvo)t7efy%<_Kv8Ǔk ~`gJf־.OxvmWqu>`+?}f:\^~6$v˰J!EL].z~VO,7Swyc*@]57oYsex|ZCC57?5_W4?P`^Q&v26{7ACշMA^`٧N_Ƿ{-2﷛)8UQ8sCzH4сT]l娔UTAa{֎)^G쥻uxyy؏\cz{^FkUtL9#Yd1Q*807VBKz-cAlY2_=(ui_1tKn L=^˭^tY/ HB&<5AFIdVt[5}{5[ZkO}[*5NF= IjRBqA& &S`.zg"Okr cթ߭^ o#.;[ԅoH}2QP)OTrtj&tVؔeVH~J%1f\Qr : : :: Zn}Lk`I:r&uS7A,FHAx*=xj"D2Xp(  $#1(Z$l7|j"`$0PtV;F[P\WI-/^_?O%0(k[TyEF1PYWT̃O^^9X% 4I(鋆+&~į&~6,\;jCPh HM(R%5:I'G:;%38C!.xP!"euy#M Ҏg\ 7km4m'g-F֖l{t~ X[϶f^5 k.=7-}kw?Z%mB ~2S5}tՔn1fg׾ˮzϮP+/\Ϯϒ0ky!ok.<{ԾiSkE[ HǢla4&Cbk̭47 &qM&1e/6]I()6GhȅO]e@BWVu()tut%0#7trBA*ҕ>H +))!/tru(*tutd##ʑ= UNtjuGd "oo.9}0_?x;ai|; 5Ew(`K)^c`dYü`3.:x--sw{݌|('gn2OYhc>aͺ;v_АH9w*!U9ǮM3sT@d6j%浼jJs]oa^ɛMaD.Åpy.c^fjV9_>#>tL <g/ &㛦hJbx (a璝 q*&{=-5Ls&z3pyo2Z)>(uY'?ƙQD>}ˀ ]e7!((tut}JWXgQW$]RBWoԆUNRJ>$}+zaa0t\zzۡ0tJѱt]BW/zGtBW[tUÅNW%-tut4B2C]mVΫQDW\]e\2}GNWpO @o JBBW-U]ra*tun/th%]BWGHWRDWXSpA-OeAG$i +zCWBuBHիЕ>+L{;2Z0]+D ]]iF= Ԏp5 ]ev2J^ڏ2/0[=^-%yݽj`vܣiD^G$\X~؆?Zls7? 
tύSx we?3DBzN*cv(eL6@zDWDB3\hM2JЅ4 be8I]N/pB{m',d 쳝3ӳ-=<wk[K%FwkEFW"[9*eUYю9.d<9ε #6K_f{os4{Z'8y@Ik Ѡъ$Jlui 1:NNsQp2_.oNGwyy ΅}g!e< ;͛S'ȣ盽w'4~w{{corB㫨gT~5Q^ _S[uȳæʿj8HkٓkUjx7[kHwB*vo|}c6y}2!plq뜭D.\Jxm'HgCWSRy qϙpw~bXVಓwXY q쑔n~o>PI?L.A>>Xc2Pܛn lE rHȒ#9^Ӷ$;DZ*mK3? pEv½Vlv2Kgӏot=ݝe3mMSW [ՄA5>K)sT*^sOc"Y| Ю\~eokOZ|cUā/P=dETD2'A̅ƒH-22}f61}fr _yp7أY'cE 姛՛4O)_s%y–ݻ$X$B6y/D#ddHe!tY2Ag$J0:r;9O"z]lQ؜<7FZzw,_Y9\o+_b\OaU;Qn:Bqjj^jsW;Y}uA\&S7J&ކBmቃ4_V?)hV[}nWPsFSf?{\zcx[XxtlטHv`zs"8Խ6XJ-*{?PEUMȅ9c?wdqw2gӯ9KElv&|w;G9ߩgڛ4]J=lzJfz +L3߯csρei,nԙe9ǹAغן Jʀp+KyRɐNӠyl%{R9/y"`Qy ,!e6ҿUv|JZea Kms"H\d3Ls&]Qz6A9Р G.Xiey9`4X#9clA:O☴/BOQHR(pe`Ft+ q9& Vb9&$$FH%%堘0 jgsن&I#?)5{V,w6^n`.[Ջj=ZjV Ut٬{/b%A6aAp]я%SL>`IqFyRFs%}i܏ⰿ}a{:f#OW˕iҿshrDIжnu2U4#* JMkLtVhcJEriUZvba\_ji:DED#YRD&&؁ Cy%iCLL: R>10CFʌɐB`D)dT#pO`ABc)wgŃ<~(^«_Ҵ=>͘sVi׼zk3J`_eyK3罷}ӻH3!Rj LJMۛ0 "8w;dqʼngܒ!PT㒉+vRrgeޜS˳4t_Yd'aFG⇌@stPd5kBxC6 tO[_$qqCtӻ 4%iziQ_BUJZҶd:ÃJQuMLL/3Z'.J Z'i4z(ݛ/;"q8mlzbkb.(?_Vn8zYϯV;Lkm .ۚ8&!i4nFbYǣrutW\{겓m}viZHiXyUjWW5q]{8 !o^o7㻷߾͛\7߾u4/ӄT0L,~,6 k/?i ]5M-h.{jƺrKۮB@FL‘vj(jo 7W<$}>MlC$IH —*NkEVQV(aEȌI<)FKIP۞/mXKՈpHbF2hlpYxqK:'"Ce9e$6u:ԩtvu,#O6g8z]rwy%r^u |pK%z'<:S-Zglڸ#Qzp|VFZ!I#$gKk9G!Z\bܓAap.jf)C6j1cpii"mm06RQPCiԛR@~!F9nMܠ {wo}a48jh a~y.l˒G['Yo+P֢\mYh }=iZϤiic\ӊ#ҬʄذhӟuRq)Af+T XE'rxӥP_JEoTY$&GiAeh2VQjrGZ#H#eIЦ0kR*F`q.P(_aa2=r#G]BcN3"eäyTY_ȳ[]nǩ}b>i ݁zi"fTVVAgQW c0,j~-I<9zs٪dRQ;tGДetdb6@ ,oMHf=]`~h)h˦i,JbSb>IZ=|QvW]j`3"b.:^QQQJ7/yp=99GJVTrjw zׅWȜr@%wC6ݡx>%տ%Tr}Q+\vYRv/zdN;k,m<9gJ׹m>mHs>6>;va39u2ܚ rUJUIG ښ;YSKFЃCp>""1xk5IȒV1P+2 `'ʕ8@@*3!ZET. X 1,q\8-_޷})чu ',`NSqR x!Lb$Aǎal|0<0mC*샍(!)ng@:K. #&wFki\6H96gey$rYjL,XN/hS []dx)HDǮcd:#g?0B DŽ鷻ʔ^פa5A%)*E;:5nZ޳VƁ'D@OKrC02BVѢ$rSE35Y)J&k :.O 'ӋxR>Z<5A4(#0K%pa: sc6c,i"lBDN uwx p9^T`FI3Pҩ $lH (m2/c8IQq>oj"](EGԃ-AO2p";җ"B8'!A3eVٻ6r$W~q$Tczfw:vf1сS6EIɲ{b&DQ* Ї,ׁJT!?| aV2T Alk'2}tx-#8'̹Tṁ}O3r1o>Oˑ<1MJn] O—em#X&=~r[VϑLmUgmy:.Ed.7>Klw/WWx➮Ӓm./|ba)XUPW5qP|1L!?/[ȬSwa.B0ےƮK7'먠t:+MQ9D2qC b2YD-0KeKftR, ߖ&,M˕JSi?ž+z8(gJ4)֠%0cXA!Jپj§m*a ^!w{1zlV/+½Ǫ0JBB!B[C{f_dN. 
i':X1cv\ѢGGE½󅘜t؉!e ,rF{2H+ cwM=MJZT!g7"ӫ8CMٗ lœ )i,=N\g|~㣤fVUIltNzK$zORѳ~!Ϟ9Z7-I賡Ё29zswiқqbKz-߭=.`?I)Uw:e}_CyQ8r'2KXݏ)VYt'y= }t[)HںlbtD2K9\ ,a)Cu-'ClaYn>U*.ʆyRzrr|O6U rz냥뮆zݺɱs/d)Q@9k4Kgg$ Q"$]V}B>$~XJn iU[UoVx3+<06JB0%(A|:N+:՛CbYC;; JV9%#Q,%|MjUݾN58]U!,4UR{ISFLF/~-Pbݍmm17BTlf[<9;/m޹14`HLFDϽC%)$h2a)cuK wAKe| \gd X6dὡц\;8o(n7pKv|8`1 != 6B:H-|7p[ >zPsNAYtm@zzP;ԇޅ= 1B}^ (o$"l!c DFOkr@)u%IFF`.gi݃.jgP$Y)Y:=V2&Ô./e*=3g kk8+i2 [ޮ(ݏx9u6xt  >4z4UK볓69x 6NxU]aPpE MxggLBF49"PYWed4ErM"h (yI` @hʉc"7lbbF?)D6k#t+b+f5bKڐMM[a6!ɔš\F*r<hYͿLBIǀk)V "h8X )ՉEC,3&푎R({ ]{K#tp pm{r, n4' Qy47_a10e2qYmK:͙0[B2  d,fȑ(13 3t̒cdkuEכ0qVԷ\Y1*8_N{}skůg,ezUIdis&3D.r%2>*Mi!n[Iymtvx <_W ;9`7{;y!Fj-%*k{0IVJ+}OzUlI^e+)? *+8*j{(pUvJcL4zp U֘U>hRcWW l \iU\FҪT? "W$.\i%w*R*•!fm?TͽPYxF߯m- ̯:R0g􏟾n4/.6qG#GI|+ (#$gdNE\u0kvEZ#V5~0mapHeW$.PHH ++T8 "q<vUu(pU|HikRczn<=.]14쩣37aI'{ߔ`.t Uv_ 3t h.#Ťіhw~^/p~koiͯD9I>ZdwwJyW^_A,/:tbuWSxVmSu]5n콮M7v^oOiҒ˗$UK4J\e_8ZLWIieu7+^02QlƄgN*X\JƎ& 0,euEN=#o]q%UY&'ɑ&ց64kF0A#ن(AӢ!DCBw7zPVÁLkO5&_0iX/ Y/d[rk֢.|)flbxtdOp:XCirNaw)bFrOD?M3aJmP**in'er ҅,9DKS{$s2*FA D>&e4x\DrV&n#ӁY28[z̼ l>O;R;e۹nwb.?.Y T{3QE蓴9) r! KX -^{%Stv']\eoCsB$8Pgf'V؜7FtT90W;68k<tzЮ~5q-{fYeəE^?bL-\\4n܅IܙwgG[F;ZwVsف7D Ӡo%;RߠTd<qNQ|*A#...[[\JZeaK'Ef1\f"ϙt6^t8[BqS1mJ}nyz-O&'﨨ؘDG꘴©\RQ8eV0Fh#*jfP0ԞkҀ!l%x-QxR#9(&1̆ʊZM5TDKӬ'7=v h.'g'7<]~[βg3odJ n|Ҙ<=&h,M竡=CS[^Qs4 4A@J0CЫ` sQ AWZuĵ59yfITQrAГ%LY Τs.HFjlWFƮXh+cmXXx%QEa'崇y_akr_\йxz9>㈝o$Tȉ'2d |H$$Q" ,)E Z!b9266A AɄ&b P2 v%Èfb jW6VFml6W23u!Z QgZ$6B !3b5ܒ3aVڸ@ǁ,dbٻFn$eqw6ŗ|L6`I |Z˒W-_Ւ%-Rk,m XMMb=h#"""ED2 ?.$G:*pk쬗R?˶m-%"v%bMDl`Xef'7YʅB}3МQ&$MJD)!2p(;IRQ k2J8P4ŭmg%l R_Djr,uHVɶr,i';2?ThO :l)WaDp4)qWd{"[[C޲<=yHӴʻr3r5( rz=w_1W)[#oYe-p.9@,,Z.Q^DAJ]`+{u \,W\xWW*C;}&ɊH`D$$fLr4X$4*.n7^aλ'J@=6Lr<|>$7 9ԣޔe\1#Rt^ڬr"Ij̺)듽 ԇ9MB}3%#SJAi4E9V9;gq#DE/s㴶8a&*;7&gIB"HUZ(5J[fpaq;Z_ng^iK(WOs7g?~ԻÀ3y{=+.>\?n0q{l*sQ8lBzrNɹSOlY+7;)'(NٿZhQOʕ qyB96 ԚOsvRn_ IfRy=o趲ҝ^ƻ޵Hբf*)tvv:>]`@ Ω;NFGCpLnȞ5/0*[Sݺ7mMge}EE`3Z3sei?[n킑7 T?A4$McO 骩܍,.OiC*o0|ً._'g^gliZ r?2? 
\>?>)cozYlM/($V炨"o7ÏrZjvTó^P^ WDkWx̕X5T+__& /?/叟 eˇ $\iX}'Dx>yu5lеm~9~zد֌W!}1\4*!&|eb~&W;7[NFkZQ z<2bWLܺ)I0.E}}'I,(%\2Qyǵ&I 5I11ǡiRCjmMlUn͓k\*']U/qE${5`Bd|=HmdJP\Xdp : %s;-F=w2 ;Y @ђUm8J;HBJ-PtA:}K(pɃ0y-`*"KE N8K#TzU%/ΐg(A +HTȝ "锨ăCx j W>FSgwv׷ԬXz~(H~m(oz/TQ̰35p,uګg.vu`y'.򯫋KقʩO^h^8XTBB" S9"]C-iQeUQ4 &;ͩOv fאOI`ɕ#+fm5LR'SVI$y(uŴj:6kn۬b[ΒnNv}Dz⵲兖CLot=~̛/mϚu¾{b-zW|iΆ?kdM滖YeWfEU[gJ|S"#jٺn>:\ͧg9?C:Q` E* [8*De rԐNS:8M]pf}|Z1 * ,*b)&|Hf:^p6zO"seuyX8@_'NKQ-\sE5,aN%(BJRě(Ay)01g ^;B/@΋*زbhW5u[ą/8cGv/Lqiȕ/)wx[\]ߖC&-+ж@G )@ Z]qJf"M1rc^!)qJJ%Ѐ8ɜ9hlnI@:c+ Tx㩎F& DphiJ#A2mM+ l}\σ"y W!{ Ms^LXAG >djVe3̽b:Bx)`ouVfiO~PBOڨYǕ j]Qx6aQq:K}J[m: "q%5AF\Uj~*sPWoG\S~Lcꌱj\Su-(,\#wN̝s91wN̝s91q'fjkmt!`gI$?[-=Rt-aئLv{P򻅁AVXNS99q؇G T`+SE&.(C5zƗ\9Ø͟vؐ)Ty橶n-ET4`x}Tp;h9<~K-KpjQAT¶!67h;cM:L:0U:^tꐎg(A\/ @5/3B=x`OY; WuLZ u o'2Xz~(H~m(oz/c u4n9QFݨ3 :7Fm^Nޓ$+4%}0^=6fzhO!)qMlr[^odIȒTh XGDFDѕfzy5׃W'с)+[I07a$]o%L zPi呧GvXQayiMT<(fH!h$x/HJ Nm+-ġN@dHID!e!x0k5f,`#NLFSѱHg贫Qco)bv]]lWwfv :zŲż^REk[SQXJ큙.9cvnz2]0p8{n J|.IO bRIwk yٺJ6{vqVk+-wd^vej߷ mkYP+lZu}֬_7 |\:צ_g#O* [oJ|Smfgnr ܧdJyC Z/@) b쯗u_-)4^Umvu`^6qR)Ӫ EOg@;|9cJL"[tΔZ`ˣbK?{UR7 6&Ok ~oQKnEVpxK^tJʥ[ aJ]M`4Csu(n%vaFcL.qsJZQo~lytlO ..V2Ksнs%%"X"Hpm3 )v9\ k:5F,pPG"`"RSFDD b FрG!eLDc}g4Ϭq9(^_Cl~6>FC`na?Nuf`1&Y[B<uhnI{FKeseJV.RLLxzpr{3'` ۀ X4VyGYXQ(hlr  :ze)7Qq:]Fvd =!"}-PI tg1(^Dðqc:-=XoA]WZsEyVx܌Qϫ:1Yoc=sT=fVWΰ@2qpX#d3ndzzqXHRI@Qi 9B#@>FN9G} s΄tyn(pb'>KESc 3Ø6ES6UROmM0BzQa'b]-hх}HI(Z>H^ꂉ'/*s,%sP a9`#$5@ 17ʕZ xSrIdQ{R))0u iNXD#p ꑠP77gؖb+;gvJ'aRŒF}>q㛒ᤓ]WQU)p9lRל_~h麴 Ըnꃷ郶y i8 &GtJ?2iEJ.G_);ŤEY8"U1LIP |q5gA eH_6F򾘒F&k+Ѓ&Lso&%ɽҐg;קBV1|#>*lpK_p#-f89OFRV]_D8#7t6\v=jx`:VaZ6^6}i5/IRT֑nZ|@'$ZD[]>xnnk/pDD㠵4YbւH9|:P ltb'ߟ'NGQ\?s xx}$Cq,(Cp! 
[binary data: gzip-compressed archive member (kubelet.log.gz) — contents not representable as text]
2I3 tu0D ÇLQi'H'ȈIBѴGöh5\ns&`ᅗYu+2h":.l9Qk#)6H\mҙ9uRǞLɺ<9FMB^(BqIPMq,sgc &븳 93 jh;0TYk6L0oj6=:Ⱦ~#gq3}`sJk;ӻ(km>N#vŀ26(jt B9+{zH^$nsGdݫ@&& ETSQzk s3u/$nNibhK)E$<4bsh`*T("Jnj r"Jk̹8ضsFc}>gP{Q;7/畮䳧OP @dp;٬v 8YpjƊ§il9aӴp}zA~0HNk(|g7qf(h&&!MOޝvh>&ZK dLd`cnT7RZu4Eۢ;ICٙLaV"y_ 7hBn a8yQ ҕPjNū I&K7!2ԡĜ h)z Gޝ~}K5g\ f1u0DaBoLtd8huSHY7A5hO$b%l^n (lcl\Pv"T|C4u!BxUD͖5w9ގ6 q j{MNp.dR|R'5WN-lP!x"{9uh>8#mVKR{MsHI:kZhe8^UO*w%6M']Dw·50[~eŘK"í$&@ )cRԳ ʖWL(|+Y(Tj%F]rKHG"ŒKS]}$74혗<<(/bJ [rP)oJdu`,s""d []nմ}Sm)grB P8PЃ!}![mwo ꏳIn[>o+[A"p)hS%`-2Š1IymaJIEYoIbC( &vH؁2@Rd^!eaC$%D.Qv'I^<`20xڊuI=~~-N-2eK6f|dUUgEHei6s  7]d&0"&DHA0Z?nPcB<٤ơ+fYkuri}_yVH-^IєPWyhOX !ga>t~ۗι|3VÕD~`s/3w뇙A _Tq.G=3jTA#vrINΔrb'8pۓ$M5%c#@I8o_ni2<;+(j7m,:Mm^ΥHݢ~mX5ﯖB>tG%!uz \`jNY8z9nq|QGkΜ;ٴ&fW~/|?!FkKb[oٳfl׌#hմvopq i$iGHaX07X ӆ8 `~y =Y&g'QliZ ${2.?>w}5Qdѥ'3:=)_u:ϴDux07X6vl~^zyʋx:aqKh>;|/q:i"AӋX'ӇPjho14 /9><*5q|\g(q "Db#>xIC|;F.xD30HDi6kJQ:&2$,f"D u ju66,KUsohp8p)k+%\2Qwǵ&I 5I11ǡ=)rsP;;|>[&t޹ ZqQHsDUr`ʎJn&cЫf*m~Urgn{˾zl;BϯRUl$,No.IP&s*FVƆT@6 C6QŮQ윐BZ޵K"rixD u "ur)\ІHo$с>ipT^gvLoһDǔy6?zKqOTqyv.!Isn|ȵ{%M~n>n X6D?t_i[ۖT`k0mb k@) &~Vs=,(*40,xXT{S.Ne'-=+#L9.h-Q#8dŸZ#8*RDŒZϲ}5FXǕ 8..o%:1if٠5opY^_V!ǻ_S6&PycTT*oYs\ ZD>Qo`^ \NHL*kZ7dEN5B2t11H*TMAKblJ1YXle* uaY[YWY(mzqCڜ=v&?Y~o2};"ވF('>‡(+%X`@59Imi.鵦9gQΡbK%8 AQ &10L>&.DWۍaM<]lu*MamZ"؍:<d>h+ 1)i jQi ٞe$!yPCFdHs #"""EDsAb]H"JuTp1rZƽ`<Q")+%"Vh"g3),Cq11xP֗RLD] sT9sL&/TQwfzWgק-FGS&}&cGg*%i&R_殺A%b_ q6M% Uj]Qm>h$'Z5 f "ԋg4M^-cRRۤB` cZRQJ*" d!QCޒ19>ٜ)†R&j Iiʫp3HTڭ$`ٺ9)s+WWB࡚/%@ JuUbUIPya`E G{ROy<[KlH}♰p?Z< [i R%ZR蘌qqk_H…<+Ds(TBS9 B,SQ]C|b)βQFGυV()5; D锨ăCx -*S{ٺռ'V6qo'|sP'-{#$0$ƿ/d O^h^9XTB­#8Eh)|7_sxo-~^eOm"j,\;jCAP؜jG̓H4OeScpu K-{%%Iz_ef||m %bew5&d~Ǿ!ݼ?^]mrTzJ+ AW^!VKR_]"*Gmэ&kgXf;:%,pMM&CjxXR诈>cNNgj!&d-elH_/JVނ?lŞþg%k"B ^s ~_M ;)3`SOiK9ϲ1oGd)u`Ifks׹ w>ew[Wغn]Zope^74e-U]Z!joquIf5秬'eÞηtj}4 mdAWbc^ľogx hЯ!HATrsFCv6q0\Pa`>}P*=X'q|YE:V튶v YoMf61/v[s]N}[ﯻӲafǿGG',j JNJ-E]9L`$TV(<}qwE'_qZկidǗ? 
C'=2ݹב.*&7dcWLIhE*ƳxS7~q$570 Z+.g8YcxOX?ʧlD3euo?3mѼӭieADta8Zk6|Abd64"(cV#I\_)8'uc<KWe{V(w*A&'D)M <ՒS`.zc0rź>-qrs.JWd&k$]]WY$]EpeC(%9'):Z &Jأ@OFΪ񶼺0u 6iYi% ?RLzj8Fwm<0]`騪s}1rc^!)qJJ%)Dβ4ѭ7I@:c+ Tx㩎F& DphiJ#A1팜u+MPmaK`,q%4aj}( R(@q%5OW`.F\*PUV=T*vW_B|n&~$Ke/vLvk DqEGap,n^dHEk Vqzm|]ؾDPiC>S6(=6߾XV쫞}CϾ Ǔͦ& +qR~ XKqd.Xv͵)wI䜻!-ئڎM5v6\X !Ar28Vp~rl۲͘%wjd%pJ^kfE4 ؂ys#r_S4h^W ޞ/mX\)cPq.9*~h8o }QRVD |̜RǁD̟|Zp`ɩ~F& v0R+ wM.S)Q59dx1zZqT={'!Q\mTQ6m ii3v4l+OryaݮG~XBмt2DJ" ^W<}FNm%x@d~is?_7-^jqLM'yzw,jkWW3o8ZhGe YINX'T%@tpt10u_zZ0X<Q✤ KA:P8XcnB堟0FHѣ w߿[EWfq{]mx󺿏@M@ݶ/΍>ݮZ4 bɏ.ͭRsA0LJa O*,DC e@cu肏${AFi[la=,YY| c$o'x[ؕ-ߐZ\K4#a/q$3yGaY3QK0Jg)\e *;F6A܈vKYWTf_z_˲,hoЙ5NƘ g૪R6g1"E祕 O)(1D} L! b|o9c{  )R'JzURHڀF+Jhq#DEHéNJr8i}ug۬7:`c ŦV**Qj& T)T9qő۟hZg#)W~u^=Nfv~R# ===Aq˔PWܵ'L;8ϯr7ů?). JEcw Fy)hnz$h!{Q'p=6q~cT" 2krVI)v/|T2 ZrRMS];CPUa?n#.0]QKcj.۳i LTW^W\6\fu"~ R[ns`)73%lrqi't kn nJ> Y̧p4Uojp}pu:ȶ^ƪ 󱎳eܱ@n8k݌c1LEg3F LUr{oݟo*;p6?eI]*f0J"L)pY'4QQϻTUCȩC ˅]_& ɋW/}*?7\P/^ś9\IxTP{mE/[ OUߴka r̈́?ܯ5w5~ݯ֌W!_@grT/^Um{z7_ԏģt$۪B בC-Fy Vd@(qI=KpF,@J[Ɔp)>q@Z$JIf3YF"ḖD"x&)&8tNgw:w{ygVvmP;Ļn"}x ~LTMA^l'$XbS$"许/ Bw: :k֔z.hCL$ԢygLDۖ^Diϊqnm_ `^*.Kaw#z&i" hԃ xҡ$ REe@K!Վ;~sވXoDflQ&-*mA$ q ,p)(#˼V nWueͅԥH©I@e$gSG'LIExb[:#aJrIBqohM-/LKeNٷNcތ5Z'u4#|&)Q'K|r&S׵~B5KU* SZlh &H ~~#|T{?1Q/fBI$[IuT[8}Jg6h^39">-B)>7Dvo[ZNJCio&tՙvGmnU;En\hp3фyp"8xD"(Nw7TNbd4Jc,VG#$! :EKbLx-nxwF;^p8Im}kㆩ]VG4K*bVhl^YG dh`J $-JYjIy N!hɗP7( ^(X=MHV {v&CE .ʃu6i-U1HL4Eeo3 RHF%X[#TɎhg䬩P{M'~#WWAy8rPc!9n,n(fVR#BCf $QQTۍ(r$4Q$gږɑ0ijᒸMڻN=yxbH)(hw'ER(Qj'x-R#9(&1beE&vEM3A3'ZMO8..%=:3=.+Ck_wY|=g_U\ԝ䢫OrulLA6kKɞ8^5UE5O{@Kd@mUb1*ZAie\HU6REAOBK"]d}GKRbl@ϹH/FjlGz\V[bvƒbD<-`N۞ ^zk_hd2</!rl3IYH$Q" ,)uֺqόZS1<2BR4DCb2 . 
p Θ2Cckg;b411}QUFm7`7l\Bb^93R6 ,w:j bfj%g u4Hǁ,db@jEM1]̇UdHFu2;jlΨ/ 0M_k}쉈""DZB;_Bd \*C Cq3L[ݪ,gT:)dm$H.8b@ I'͍$ E k>&vD|QŶY)1:}qQTE>​vI͕ E4#&8)u̢D2M x6VaŃt:CY~xxi\uHSK 0e !jUgWN-RSUurz~.\FȬC˲M5`]b* @B*~䁐:#Ǐ$1/5AV2*Z.GEJ92ܲ]B,_>d}uYwie2c16'^cm_Ubn K1IEKLqb%V^N֘Q 2ŸY]oXתB}\N4QVݯ:of-Vs$-~-7%-@97.哏#ZFT~ƥQފ%գ]Fq(u5W 7[0Մd}S>Xb𳽤&{99ы|P/iBAh(!v=iR8:!sHn@8ЇtL]sΨxe=EVe꛴TNPj(4Cnk (*"dNE5C|}:xOr6T2Z[SxǓ .BL9sYF qsU0~V [98so?mwY;@ވ)C|"s~IWp&bFeet p ` o"H# SUG-:)U6p >`t9(TMkl ˘)F PՊ*-= "CFc3BRc dYz\΄S‰JM'3c~^f_F?]vLBbLNiPv~qAPYmzR6G9'sӯKP\# hlh[MLjrW뿦z>'狤x:dDf¬MVeebtK޷~%2 !~Ou~fBD*@bYE@I+"DGo\(z(|-x-E'?nGqmgy; Dž O#*ww^[Yhߕi/W{YTձx~']=7o(Z;#] _̚SY?YRa.[\ nhtwetyzvˤ7c6GLa& \}M̷O ka;#7ҏkeTz&2mρHܜa_{(1L.BDʜtE+W2hi~(rb ݦtL9{XK/qYJt,2i۬_O*ieY( "",2(BH,+u0 A2a7m"h>؈Vz Xr-rghT,WCɡ*%!cBwD@9wGzdz<2]OPYݟ/QqWN &2iKU.nDe-zH" 7]4Zd&:,Ϡq?Ogq 3 ϗ<DwF, QюD HǜY:E#4 O*&eݟ> p1r7>0fpa] >!; AF-k&o[=zZq(dhP"E-q.:gw!+ ^x5t`w88}$Vb!#gB1.q$ϖ{] Tpƙ?T9BT4iޞdpͺa+L}BV|4OywW$翌~Z..J͡?\v8Ow-CV~ߏ|v>:/.G>A-;[ŧ_pѝ!3t_O )g{r{iufBhܻhhVZ8)5jn<0%nd88Y qH4Ж~wZJx>eiFn>YӉ~ l/<ໜΡv8sfKj2b \g<|0E~v9kU}:3]ܰ}=>}>_o&۹(;}=*97=q4 n6!;fUώt.'ݳAߜ_|ns `^4^Yl,kwŐ]=1yb;hw, ~sK()[4N$t,g,79zmtP>@Qj!C҈.{"mV<B&3B] J*dV{)R2W%/ hq~jw6},Mr&1n-~,j%1_}ZQ "a礪x{i!NfxRȡWB)uEOӼTUQFX+,4ѰL?F K7n_o} t$g8*^ (\;% RIBF.+i{^)b9Q7R{#KMh'7|ܬ&%. =5&Cz"I:&f9:Tg")1i5%k}2Pb`62,ҖSplfDZGČޢ)D6wV(}Ь!m-u11"6 #&0E^#Ȕš}v"EU*qI^u i8FǤc5TpVh\h8X'Lz)ՉEڥC,3&;"',!{ ]#48lc[4H*øەa~w]L&KQ4.GlmdT7`=&i,_ ̍%A$#Ma"+<*:9! C,YK=Qϵq<ki:Tl.4`訅aٻ6$W>=UtUuu7w6%13:`PxoR, gT';i3&ؚZjpl6)@55s<%>NqrE YXe X5$8*Cmm2IT99r -l'krFOƆTC*.H^$IE5R;G VWʴ7p=)ȁ!p}v#t_I6fXLzfHuFoL}) wrz?sJf*ˌT Rp&xdRKOV/6Y!2C>: R;`_G.=*xnӜNȉ ;4I#v4ώoתܿ^^zRU܂ْLjqМcL*'.sC+SjIVeZxd[OOl[hּoDN4v\CWW]ZNW1=mTS}S&l\Mmmp| 6_IQ\Χ!h7TVꨴަ]IilF!erZhwlfߡZƵ+(p5RޙTWSJ$уd f'e]geU檮Td :ZDl$e)?ݝ]c(,*%YhUEk :T-(A! 
PJDhG4IF4pل?9ށ* oŮXDؕS`d̔BӄdOiB*Si W%Ь"JNW@Wӕ}9C `]/rx~r+ZnJճ)ۂ@W(0b"O+n*geߊzS + ZMӥ]eNtut%BRJ;]Je:ARJItE+rbК@W'HW@)U]i*9)n3hڡjv(Xf\Iv CWWA)tEhMABi@W'HWzAlb9dä~oB!izm!GB wn8g8[J#tA~+f̋v S; k+*+C 4HBXPa9 BRbb tCCWwI]k:Z;ڡێ]=w1W]wlיR;ZO+B:EJVwE+f5tE(:AR9[])ƻnW]ZNWȁN4ЃprrWײb w"R tutev\CWWRjw"C$ 8 +Zj@]\#K+Bk{"n+}OUc'ժa{ԠuTvZIV7ɖP+!郐 4E5\m#J8gʁ{'=ݖcDA6)+HКxJC$w)M\YL'O_ ]I8Kvθ3M ZU. %JWd͵]"z.Y]!Wk he't2tleMIt(h˖(J $"`+ e6eJI)+xW;]J>NԲBv*'NpABWlS+#oqTBVbvAKK`P!|o-q;NyonC/7N;C=٨?n h7Fx~V f0cXv,#mfcaI]N/v'> w>!i: 'hC ߜA.lnK[X+G_?_]&J?o.ܫ7khH*O3l{Ж⻛׼9͛7HX+o$П\k%3eV*{Y)|ehWvz}ɷOYbݛH|fkU)r7|?ל]~ྒྷoe뇈 u5ݾuld?t(f,>ġ/jȉ'9J⻯!1N{R\L"2C>:zkǤ0*\zf$˘ 74y?јѭ .2W)__euYҡ|6c& #ID+:*+Nf#kS(ic8${or" yeq# Ul{hQgg؍FRsַ^f/*Շ?j}="ϮwFٮ:v1ٶ€վyMl #OlWK-6g[| [觛w4exzN)J_>;8ٷ3=rzZ].scZ"p~%=h5]63?5Vƣ_07a,.He۲Oy5Fw[n)C }6k2[ #1-FXXX"j=ǫMr.$j*5ʉfddYp)9ȳU[<dž pv$Dte0ڥǾEkV̊j1#SLѧ䌆ĤXG'I25Gu"]BgED0d<8 .(cN9[49pRdI#;|Ge/N7T$~jl\-6{d YssJ½N|я ;`P.ʐ]A C#j/m*Hecsm lGfB |\v୷AJMB;ਠ cIV˶铥\ Eq2!7RY.'B 飪kHl /n4jfbfh}r C,䄽ډ| vV& QlLq6&Jp]YyjZֵ0ynMf {)UY͘lG6ȑrWׅm"w׻kha覜[msUݓܳ};n[lӗ_{ײ}T7ۢtgAC:tI.yžGJ*3s ITNr 9%6qH3aY_Ҁb94 5A;G4!XQGtmeLBR>!qƎM?3*`2 \210`^5;t^{Ũj :`PxoR, gT';i3&ؚZjpl6)@55s<%>NqrE YXe X5$8*Cmm2IT99r O1LЋ@PٴOB匞 A[ Bv 8F"y &H(G?YUW` t=! ]Ň\r !-M7zĈͧXg4o[W(/̐(1%FFb\U`Bі-Q7>;kyd3&vX ͫ1d0 \*)ewr>uB)L>eΤܛ^C"[Ew>NVFW]na<ǃso,_6RxA?{VcR$'3ˤd%eq߷xt%:PVН>G<$}ź<-> 2$1> d1;P L:PA1  "dCem"-냟Ifo)1& Y$OG|_EWSht<Ƚg(CS_A} leްϨ~<5$h^՝_l6jEM&͂2%F{'@ 2i)#:!9A> N@*5肩`Re0.{]!=@=?vF'_ɶ0)HqAM$ b$`k1RimirŘ󾀴0(="RQ1:_9.ug5twGCSW{ËCRW:t/לEgP@Hsq1шIA@',f*~YN'W-ryJ.:%sCQPsG! Ws.,2&'JSSdJ&h ѣ1!¦SU1 bL9#^eTު*YbPo ?ua1Pwv>tO+\x>>;\v_i 0{C/1VaA:! 
U!Z' i՝g}r|>]!+_{=|W0K-Dt!-Mt4w 7y>~Zps`eDǠ 1P7B ?I}iO(H`̩}R"`A$(!(_3: b[8r9[S (rRD!F/TE]0x.u&ZUpfy$|Pe6'3n-OO`˯ 39L%jX IRцx}BJJCJ'OHJ8p:VP36JP<ΟR=!8FdpP)s6vA﫷}wΰ4wq< }PD5['O6JZD=٠58>TIUoWq5yI@6@ϯ~[2((@MNIE/ h /skŋc48_-X@x`QRɄwR :Hi VR5c<" o3L6[OՅXAT{T"ぢ9m{zNS̓ j2_|;38'T*Y;gTMq*ʢ+ɯZCͪr%{ @U:k 9V!6BrډUwk8[s_vTkm gFH Ap2 :zf١LT,m I`PhҙJYUY!Y!b &مgͺ]~U*ƽ5"qFOӇ;PaA5_~j |/ wXx=nw;x?>ޏVίUe^0͝M>ΝiYG*Y_tƥ]w> S@G8B玐~!4b;r|R_v)Dix(FE˗dQora)E-5"Vz!;3ug}t!c0_w7y)X~oGt I%Z0"fшe2hKAd%#ކ2V-.dy3>VI+kb٘H+b2P,ͺ]רkYLr ?Ut”e<r-ӏyE:nÄ.gYdo9pgZ4*ro579nҀZxtZq] S20q1UϻsQU ]Q#+c]7drZr_-t~ҖŶXcl䐅䕵:2$iބPbN4=L-fM—󞯠ݓG5~ =Pg0ovx֨}SHYwAuo$K(x.dR 16.g(B; *>AޡI Z!D(Ge8*uwW~0~At;BGMݺis@Bോ,Yd֙M>AX-E$H5!%NF+Dlo|?[}20~(>g"]V*/1wXZRzu 6ZyiZ(c.4CeEX!eLFa2{(4kˋd &b0>' JYSB`D% t$R,)-'l$mPdY*%:'FM)+CPEjج;4 wg:=3>9 وV 7qn`sXybMgG*Ţ㑆j +lAi)9 5J2')r.HJuoV̍ZOm{;ED_;gt!p+%끂 ytR:XզJwz^tK_Ppi^d=>_A"p)hc%f: *"P]ik_OڿArHDNGF)h0DF"L*$R6DRBA SBjq, B3cV>)]3ArO|҉Q@o+c7=R*P|]@?^/F6 Of&PVo͏M , U6j?ߌN4L~ȣٹ51W t$-Z3(i̫z^7p$@Bj-5<Qnym/Zk$,~$DrG2jXlᙹ=ZO?Ls}׫ت Eտ?yU-_8..h|vB'V~|~ݿ뗟wR}O?>'LU ua-U]Wu |?^Z5hoZ!Z@.Rv4*z1_a|П;Ï\]4EIsBM<QLY&EE2 ULV*Bh%$!gC5[Vʥ}48칟RY$լf86lK9 2BHQ\Tp8irW~3|΋w:k_ûe<^hwYMJS1ݤX%`3ƂlRZdWLH+*Hgt+X9+9,=Ub/Mp=% VO sKȦe͒(% Y(f?Q>YI'*hgЇP@!)Ugu>_eEK̎?/IC6<7ϱMp&SN~-M|E7Ev8XIxDuFM$lPAaaQKi4 ,DP63Ǭ6F2й{,'y4t U+<5a)9@ )Ar Joa .+\Nǜz},ZUsq`͉吮&jN# 6 [/N8xEQ1#Q<)*d¾Umkmq}y2n 89lҲ m ~0!") $ um肎:6-:6 :6mIw --щ>&a7T M9 r;OE!w&Yf댭h.P፧:c\kB*y+`$S1rvqp/7\3/xVj-=ެӇ-4K=d~n`ɮ—t:JpA ZYe«9"'uu!OyAbяƥZru*;-JlYrp*kQ+AGݟgq<\ְN^rȅ%ep sx0!/e>+O{bT ?(cD- >{C3c>$ϧMB)+eCu>^1S~mi:}i2 ꅪfU;pRg]z?Sovv;yD9WJaE Yٚԛm;b%ɪ%B&*OreEHZD.@8e}L%܃C^tBzȚ,"Ǥ楶IPyE+HYH"kV|/L@j)rؖ?t>szRt&sNܚ8|nNsZhhI=;e9MAGGi>^qWCH,ּK\)H¿ā 1H?$ A"`Q cۄP&*q JCG:ͦBP(znl $ (6 L$xZĜS)OM4FvH*C8ʽOoɽ z@!BjBر3L;S vLr=Ύjv'$`CɈL.է"2\]\e*Au+z/tpU &v;{;č?9N bt,7Ƶf0C?Õ%* BԤC1\WU{=Z C~gů#,3&_>l74(%TK;ec?%wH,8 Q 5"V{~q. 
XAv{k5ߦeD ι+YRcl VPV= HR gP6 <0>B)OH~2 `/jHrv&S)DѼFF)dY2r/ߗA5E;Ò}FT` zD\~Ax_ /?GYӀyS}GYhξ ыoXv`Q/0e]?gq_s,\k;ޞ[ %\-=j ݶ\sYu4'a&ym$ֵ\ۛ<G1A[:u .~'<|1K`.X E`.X E_>X,m@ʢ갿bf;D^XS0+% vDA{҈=efk*۝ٚB2[eV e~5[tDb<Jge7\})lGeǑ\Yњin܏LxѼ] n9*8 .qf['Rk,އDqޥyJiO"B31RHpƃjW4 peh hW#gK@6qx園6RepFAH3t*­TD-`_Cj;V "}$4T')U1fZ%ka1$UE|Un"۾ӨM<T"O9 &( 0e7>$4Xe$O)IZcsb'meKcPg|!sDO8 <$)7XaR!xB }OhT x - yJN &VH |$)8<0t4jtLsi!h[DNl.sy3B )bA*'F!pc%͖:o:/C Jk&DJ 蠵^lA#g_iiBv!HeYOvR4HÝ3R#!idG :fy fiAu6l `Z n6a FD299!k,`YP>0'׸k%v0-P/JϴR zCq|=ںm:d:ez*&~v}s1-tld s/dS.ek:Dޢ׀y.&>((jwYͅ,>6~ h*`! ,qE ቃ%o;a)) g6̀*pP ņg5SPx@, =+ yF m hOA4 c4&FAX $o]L ؤ`&>͎OCͯϛBx4_|爝o$ @ $$f |He(Q5Y$Rh aȬ$hT=O P3DCb2 . p Θ2CcێVsF0[*fWPձ/jQ3Gu6df!d1Z O);Y#:j bfj%g uW%#IҐr -e"F$bbE5G(1Ȑdo[kxX+Z{""vqU^="6KaǨTLKeHCa0^HQGsF jbmr©$ke3!tHP$MZp[eDl9;x%@8V%[gkd_\-"qGCI͕Z,*iHQGLN+&<G/TŃbWձ/ʖPMeͯt^OZ׻~zҍz ~|%G'<쓥w=$5׽@*O'-T>[>Qug|`xSI hjenTi#PDJr1RT)~ۇV~ۮk~{؁j![=+HTNQq]Nܪ!@2BI:!emgI%:s 2qFf̵lh9;mςNJ6rVlkGCR7F8zfcyhQz9hP8=$b)2&kKX"9sLxrgwúVQлc_a}UvI_F2x;*Yv[zS$sWsOI5rf܂5UPu6VQ =)a|0M9wmzuUUΩZwоH,_Njf7.J{J3ukw(e?uz!xxwqT7qWki;ΏO8;kWmcGgC=79Ē'UD 9ixJ|0 !Eu mHr˹+JF~i1B۫0ph_xF2{l mwFӖ25eڬm: *8\"b35CMNK>a{W$q\̤a"$UQlS ծHcY4qTVcw0b"0i{QRE/U^|"VXp,4<ɄAG7kϴ, tI,FnE"󴸼茫z[kk1=_Џ8FIuJ+)]Ɍ/F0VזR( nEIB"8Q`e:UAe5rJ<+Hߊ eS .s,c p7h}ltaôrIγ54PL4©j_nbΉYJeQez:+ڊ[kqm2n:((%M6d@sC< BR F?M؉=Wwg4MfKZt>+ yJfc@JbVhcJErTJ v'0˿>ݵ"Y,l$ D=Dl(q( ^} j}bs٢2c2$ᢐ99 K2*'i5qūfE/ku~1i1?;x~\orH?‡FS|=a]r3tXy3g~i&?ʎSޚAaO4?|.G㸴~HA*Mb~g^l>raN%Ed<%'h)?0>/OUg[R2ͱAdl\KoiFv~*+(M돛D:i`z1ѯo/g?M̶4ZPƜINϖc+2 &O-OXSM5j{MMՈjlb@BdZ`ŲO'@ou5wNk]w֦ZM_œ)eTh0a)r<(1'|0]:m-TZSn.Vʏ.ĿX\Oj;p1?` E@=l~~tMƆ5T;0E$- j^D$:nwIRd z,,L3N9&qg]Zջێfڿ8^VZWiQ}N#2t|U 3q XZ-vN -u{w}n> (u_pc9ZΑA1-d@O(#f#mQgl"2w8fx\ݟ_&&N/DZ刂*\`A vjr2FAF4˹dz5i`45y]3Zou-筁\mhK}L땘.UL+NY )6qa_홝@ M*,eXTrwsޫErޫ-}5N%egorFzT&*c%0 /7ALȖM^R).T%Rd\3d P0;k-fцv֚E2.op.&QUⰿt8:#;@ZK|( |r7x>j#c;H&W|o΅^.H҅TGѐ"2 2)D*dLE;8z\89O"z]r1B(ғPedmo9OKVl:]&ͳ^{3#zEYf {D 8K9KZ]O)b 8 8Yp/R`r؂09u2$̱$TJ:jXoLO]v/>n/p18@rQeaR tFIU"eA:"S+\qˌDu 
'+OƮ~6|GdsZ{ޏ:Sx>̆',`NS8.R*A/Y̒b>ӱ`="z`倁t6q\@Fq9 X_F{2tTxǼV!.!7 eJ̉n 3d ;6;_&>m!zmGURKVuR,9 U:$/=E^ "TEک"~BAcCIt0m,{oʕ?\9U/wo1h2D0r3`6iX26mPpkf6s>o6NF?]MwYfVWrǟC.O0?V- B!D,ݻKme޾j6'Nxt9>@]zώpOo%6:[ fV}mV8\myzv|`_'μ3s ҇IN-ѻ7>+pzS9d]B+\ȆkQQjvJJx<y-$-i A 8%` Ɛ$C֙3R%EWt@z^Tu0lぇ!m0ǤQ>{zYzJ-9I9y>ONRJnɾ^^ g НX-goQ#vj¹t!-3o]~ ҺgtP'S.YJV흴5cMPۢ,B%J(%291n_Y:\VȘ%'+T|"1 gY3K쳗/`ZmΡ-L_h9o:οCj9D.Zo 3 "W?^޳|F%A.G1s7j\G$zm#j*1Ǎn,~}ȃ]'LˆX"RɸPm (5֎7!YVAUIE+՞ gvf~>M8B;}%:o旂d>)h[w:ǃ:LGpr :ߐH],ϏI>[Ʈ:"w2u9 )">R%^!3Tp߱1Œ?c'\AR}&Ie$cuv\r7ᬄqj}.b͠rdW$Վ]!XD*TrլUz :+C[67)ƴo:/>S.ӵJ#/azΚ/@qMnM!Ou<ų~}ݶLLl׻[ןmSR3I=gP|~yoqSN5]4Uu$9t*\GVTTAN6K04sY@,Wu!J;vQu 1e_#%ip κԹTK)g}jK[`g*'\tҗ#Is 1zIT"gol B9lpf C_PP5k:ʐ k=\2jR%QCwyKzlEӡ=˚~Q<:MgW?-~*F N;^q՟xZY-A}ؿ4G^{ۧ!U`AHd tZ6&V;ny! GSb&OY7}QV+ܴAe?w_ }g/~%OJvp p% D}g;jnD]#5_hB9YS;6z䣦{;N؋!^|p_ٙ3POZ !pz $N/WϘM!,6~v'M< = ܑa?'OC(>XqMY{1'tg~; BR6tyha]-`T[Ķ=:E1~i5&[;vmQ77ϑg]Ɲayv9OFAȻ Q6RT<$F;Ls] ߵll$.Ozry rރ:Cۜ9̴}6oy:ϭ1B6fx%-ĴC"1껹}U׈ҫᖯ_y.+pqY=9aP}Ky,kؾzh 3;\.0.zdFh 5";n~raW<>?6qދ%܇ks_chfj=k7znXǙz3C ٿX?`Q뫋sq<]cӳ7n1eh56pۣo X/~//_߃CBW /l gNW "9UC@W@B`勡^jh;]5tG+Exˋ>-} 9v֧pږ}ճ 5D=E2c7]s{hR 9jO>.xa _!!Nറn']P_]ެ3 jBD,"a{kBZI:Kv-wW׆~oP{ѽ\歱tBܜTߣrpܭ.Ζk%޴ZL)OsƄ\1ݬO<=97{a}S o -gZ]o:ZmfyuM܋6+LqS xfT1.@$7(7mfp@ڟ^Mezmw=D`#ے@EVjXf%EFmd2\ $4s Wꥣ)WUFA8zY~::5\ F ^V=Wn"Ze BѴ#k ;6j|*H-V eHZ6Bbb*eƍYR";[Ҩ$Kqc#HEKI9α^hZ2^ӧr:;9i'R^MR)G\YH),eUdX\$"Dca!=lilX2 Y91Zs#I(Cfkgâ% _qHv.$kԴ&#-GXyKظ'S4~Z'&I$(Q9Pe9QJR-Zj~QU/cPB&gv:>#vsa+Z7xS C\ pp N?RSh=BKV` bDDd/ Evbo21[EIOH=_L:%HdLhCR S ^igd U!h0fX0!O.hY[V Y\wP$BjswQVm)$2*MV^OuLrNd#|,5+(_dXkƩЦ mgtVyfD)P,`lɭ)0A]4C AvyczD R q+)f&q]xy߲"0s @ ͱ%T '%Υ"ANŘmpg`h&|3'[&Q!o Q66 J@`E(y=XgҾJ!L VJ ()0Y(J892|$XD50K, Rl alRD@j>2.Zf DI2m ɒ -g`#/hO$$ڗ"ckiR"2 !Vs ZZnܰʕpՕb1F Lg,fmG7nY6cV9ih1PT"_HMa8I@1P5q $$BLFP,C)9q -AhGM;fCd]B2l@2 ~ȃ!2!vGxcaÕKm;:W7F)` (3uhͮbaL^;&U3E-v:$NJˏݬ RJ 5@8iM$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@ҚisJ$\О@(:-PZOi-8@hG$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@D D$@rՒ!D}2$qcOa@#H ""H ""H ""H ""H ""H ""H ""H ""H ""H 
""H ""HWKB ?PB@I R *I 9$H ""H ""H ""H ""H ""H ""H ""H ""H ""H ""H ""HC+h!wٯZjZ/ϛ˂ci'73 S:%pIU3p s2J{ K(冊t˷L_\=I`' k˚Hk1WOܰ̕|d}80wB ɘ*vb \q2W\ .YkJh<]`}:,`/Y%/~<;s$6R6q~. 3tPW:i -~3nցZGf2w5FH??o?)H)|pI5j.5o6k5uE=oL==X`\2BgƱ)<7?F=N_Y2'>oO?'񏿔c7) c\*mc0Ym9'~/xx c6BunYeml9V;vmsיΛQZ)Wd5PY |bkcW 򉠳~oCz6نeORo6R/f_A-iW9H$Z3//uI_"N7˗/`_ }oL7k8:&-#)#de)"V~U,I1zōLKkZ}u+9Z q/,N,;,W]ZeAr?mΓcOYA?Ru޾UBRICqPGZ-ӤmZwtk\!Ε;+jN#Zw e@Y^ŪEpcKd_DV֞&GU;-+KmiXIȺn!`,Gvؕ5u8!wDֹigE4f+1yMplJycg#4U=*H3q Cͤ׾ZxrdU\V;ӪbJڤ Cr exJPJY)zt 6Sm9'cCPVcƸP{.H^$SY)-Z-#я]8g*hxHCyz5Qh,(xh˼ZP'cXz^hsN?#4H@f:^vA i΄Si{z}Iz4Ŵgg$ Ƚk7ACYɹgCOy)Ȝ X2#w1&ƹ #u\zfIwC6sԧiPC8}v (ԐVo@Wqvx,QA,yŷb?jo8FRJe!Yk*1 h-ډ6 4X;giH~3p>nBܿf;u>-[:q'A@)kI0v沃`ЩdP΀J>Ty1K%5b `\ |1V [/ֶuGM>g12Cp sQXJ. :}E m&Ξ&{@)V޷J)etJΣ:f*q T'$$$)du;-V0BVQ[lfPQ[Q+M-.ƾ/<4fMEE |z2fJim㘜+~j%v\&&e9gٷN*Ψys @VeovTxtEG.hs*$rʸ6') /0^fbg7SFHXlAEV)&.4FdHQ|Vi{`Zj66ⱐ SК0R7͌6md6B̡XfP[mGV+qq01}qE".G,eyDY-$Ke0 Cxm73up]*gZDg9cp꺉3pPRLH!Z:kn$jhHC!'qUn\Cu6%E.rd-5WN-ldIIu~RP4qx2j+..6CLتך}U?G2)AeEpTBՏ[՞۲5z?ΝՂ"iWdPnD <LBRBBĎTy,(dKoM,\Y(ҕУRNA&g-5,ɺg"sNɺ?F=]stg6 vAjz3S;v1&(=4(SWVu9fkJKYi_R*mk oCy:>Lk˖+UW{<`2fYt+nbWLYU`gOf,YӷTΪYUJhWwNh8h|7)K|љ51V]:vahgM]BroD-}_^_<2k-m:J]=7G77-ݛyX|}wf뀷w,ߩ?2N6M˭]w"*ほ3J-ҖT9%J^M$hM>3 -Sq"`NZ%Nk)mj1$'>>#J.x̡ e\B._Mw]&.8YS'u8]p :ӹK>=W{ bxKU$E I8K3V%iu0`1[{sZC4he+Mv޲7@rΗ /'uxRq[A‚&fEó1Fq홖 )s6&nI(,oSjL mŧnOWۼn ^&lntlV6B%EVԭ](\vh@A3CӈԀ*(kK-Ǽū̋ 㭤¬Z)YX%g{8Wi?3ٝ> ǙHq&kuJS¦,k^Rդ( RwWz^C舥CFe_jR۫WEBmڻXY iwM.Ze :&wʣPf% $b0Jy a]QE<𜉨)VpCK? F*#ZJNcRzd J3KF'AccPa*stZU>f۟08HFT3U*)=%1ET=єk*y80_܍]:vHR9$e, q $:lI*:ۍA:TzfYH+Fm iDzNcnP@5yYW2X>%m>jqm\M05&Ό PK`>WY)E=M=Kݼ~CviVSj;Nzk07D (Y gf8"1 }ؗwd)/`@2@{SDhp!)peGwbjF:5+Bdsi~`a&*|Ny d|q8(g|4-33t[Z>ڔB|\oR͛rqRtٴoh l1XiΉ˔ozQilɏʘ귳dZ&WO~(]?x  F)\/q /.z1KƑl<|3+]}'w2z'骭ލ,3ش>'|LNxtLtsx5mnA{mkXh1"y2f9KM!,3<٤ruzQhC7De~~v.5 8a:}O2.(} _! /aw>?~?c9GXYppT H$I,p뿾yk*Cךv9z·+r-~ޯRuW!|L\%em ԣW*x[:%&%FoxYsP"UEؑ%Hr'9!PcSi^G:.y{}wA,&-)jL AtG p\ goB1Db)t:9)ǝx;}x=wYU gn;͢"­WatC`RGSJmΌ6>)o)Xe#׼v. 
M޽w #Ϳ'w7>tz ĉJa^i%$#/g)Oc.*5hS>:ҳ Ȭ` LC(cq\lKMnF6d:n/8{逼 lkR ·}Wkxr;nA40=.ܕpWh48k U8hWR=z&E&sǀ҂$(/R;TRF9b U`:[.LGyewt6j/~'=B` i=|&%D/&l"WjzD%#}ٯ0l*'R~$:ІwqEs+"&7,VȔNakÏh{ҽ68M U\Y<:ZƵ[QཱིNa AHRVa S띱Vc&ye4zl5ͭ 6Dk 6DvIkm* h=ޡ,;nL췪@h1pS*v h!:-SP鸝^NGt$l#29ϱ8gȫ\Ds8VB(U鎺W+t>F?@"|i6/ [?Lx(@IUK~ w P.v[ >^9$Sp¥U(hXTxa@ypCIư$H,Fku] ـ|u0%lYuXݺ$-.u:S?UR͏[^%[R]E֎MwWWyAFҊ l}.6 'm<. ޳hvoOoۛ[=,lܺ{^ˋpfa:lhy9`1]xn ^}:cǯzCÜ57Z9c^Q ˁU/0\ pLW^wawCcL.q}G^0v&O kuTTdpǜ,&;B${WW4`DaeFLG!@7j strO(ObJ;eeLvڜ'{x ]ik/@xB搝|8+{7Ef<\=Mi&hwy`jus-wdzx \Oki]x`ꈛ&Lev!kw=鋊MhNݎ?MnM+K}" kFoZS\srg_m浈x#$2WDGU8cMpb{K'ⴼ~}^B]upӮ&d0FY Tc(O!)A05Uֲ )?I/z]OPggU尳. 'zkRoO꣣):).}^UIɯoiт`+CQQ*GJ5>0h} 'P`N8u!*'5 NVXFoCDS)x۰@ +TV8`1waʽ :aFhNPƎ!ng_W4nx֪.=ެ4rm2AX$>]VEx*Wlr ʖ9.vJc1F1sϦx>nR >= =ydCa12QzO'KuII#5 4<2,bHP`3ښ@:%mKX޿7SDh p NUwPpk9-l={tLH *AJ*R S/d/AK-Յ>[ ]\J0ֱ|6ui/9>GH(򹖜N)ڞ&J]tt{eP$N!OH<@u/Rvg}5' h #_6dI§> (JB }(3Y5Lr)ulWkxr;nH?gNji;xмh9u727jɤ^-o]6LN@tQhTi~W3WܲR`*S<`s- '1beOLX/]u:LL򔵌r B g3f4j|=#,RFXp4g=gr+@\3 e }},k+OӶv8{\ܿ}&<i`ċsH琨UK<9|=~z-ϗf6YPYaN#gRdJftd&㰇4pfS;`ox*U Uu|Sv{l1. 
\U+e?-•8"bIʣYւ:\U+$\)-1f+zêA ]f/{D%NK4b B2K.5( oK>s/^d\87)Ycc3}juYJ 46O 2SE!g:LO-];Ws'v{~R%MBx"ӄECٹ$"hҹĤ@e F&E%xk\邹t!e_ YF"Qe Ky_T5sV⾹ >6<}PN%-)uqx:MOWu?Xm̱$HR\i[ ӉE,*8-rB嘝 :I $EF_2 ٮKLL m.b"ÂH}$HƜ"Qvfk5TKV8s;V?sҎL7_0enʬ~zԮMW}kyr>cޞ]~I'+է^HC58o%U qDTro.WR4ݞ^gHBp1x]٤4YLY4nܶfM:]yZ-21$~RWKnon׽C/=ײ[^˽+kt֗|Jd.8OXFHƢd9QdLq4v J;}|0lSqj(cYmN2G#9+eȞ!bl(N\w(8/*Tl-goO VVFؔcDNk˰M8u^y"ФYJǀ1cߤm~}o)d]B1>CdK2*gb:HI( .!*!Bwvi} sE  dEIA V{(eiE(={c7O$*<=NI?V5_gKI:;)?SӭS  48oc,YIs QzjGp>,jq 0\״t, _OHR|@BnVzzb8{h)x{uV{ڰ<4&ePaTdp{95/ ߍ_XݬVYuKX' 0T;8 #~Fq\^bnտJ?v{/[qxyN'߿Or׻?2ˑm&b"@ 鿿M/94j>i6zwkh󖷴:ԺxԯubƑ`4ϝ2uyglYIT#5RPdJYkEʩJxIXF|(m^{a.{C"Y[k@{Jd֎tOٔ,rQc>DE~M5N,'?`i^zp력Wzp%'X#JqZ Iq8Z]OqZۧ8S:ȴeC0):2MgDC-@hГiȧ|M"KAPScD :S$`S}Cb,kH礌Cr$( e-*.dK"L"e] ސMwݓD&Q}r>w!(͇uE|:ϫ|zst24?9 gm~n~O>j=`s{^VFvܪu-m0K` 7 wm14ffU_HUٝq R#0ʨVx@XTڧߧۧE-g綤)$yhP$n*fbLv7Rh|NhlD)X JD$9bQ!XJۉ}[3g#ts;/RWĶ5ur㵳>o 뮱yr^0/GO1߈b)a_-?Ճ/3]Ƌѧ6$anlikj|Q2'*Ea+a;{""vE=">,ڙ Jy ڲ2!ͼAVEdiK8ɟmI!jiFn+!!LH!:&i=4Vh[Qvٍ38mեzɾZEbW#5 H,z4I[ j0 AK|2{({\<.vf[c_<@:ykMo|1#TA9XD=(jDяY䙿Ydq4j!ޱSa4 &Pd]+̧>@ tԹ tԡ tԉxGT$J揶$o ׸ffD(KvZRwAfԑtLAfmR-3U8b@i@en.msf](; ÏoK 56Y9k3L5JK J6Ċ|*gKNva^[bYIa?.cTenoX]`n`WGJN23-i[[Vw$II>*KQ;c(ykG2ɳAlĮ[)GWIxrDHRQ[_LJޑZB#h*98Ce2#BK n)Qæ2_KlȜoz9C2}؟(bQ/ttqʜ }[R QS"]\!$dX4"dxb hkDMve'!iU)o2巟ۿ岞~%ӗ%8J'd">g؀Mѳ W=hz3Bia3vþ5@jq7_{YjyP"c郾AĒM,0ܱFaP~SО.>gHcP }`w5މK2RBQɂBrUҤSnc|;ǹ;5/Uf=|(iضNI3=fJ"9q??{L%U,"C# Xqᦒإ9kb NIeKR\ f)*б`M!r!jP҃mKs^K(ި |JEǍqt|M"h y斦˙;ҶhB/?ُ(Onyq3CVVejyZU-+z8=;ts),LD!d\1:V3WZ$C~\|~U(>.l3XfcYg /~ՊTesVs-ɞ؈|?9Wֺ}۵; Do^EЈ$ EQZ ަ&8[|f[EU~G I?4rOkrrnZ_[`4k77?8?*%}!\bwcރ|ڪ>VtWk,m9jln|ԚQf5[~{z׫:k<<4%Dw}};oovy֠o1w`{q[BhՑ<[nƒ^Xo\9o؍#}7-!j)c;di;rgT2lv\3GpRv7n0#aPgY' ښ ngjw3ׯgpI_^XczGtR\c(sj ģ4ztp ~*{Һ͡4So)KքR͇ySw }E]Oxjӱ$ۻZ~""chyI:  d Ռ* b^J(kZLpFk]1Lpb ݆D *ρd]3Z+JAQetE:0P)Ʌ'$wt%cDڜJ"iAB&Բx>ʞ78;D BGaE)NIeérxٞލ+xfPן;smq #]r+leTvkMZ lMr2XQ@eD<by>MG'L1Okz(8pK9Ku$hm& HRx:DJ?F'yU]' zB'fz#9nPy*5ؐ!re-ڤ0{;6CѢc*!8Ta%P{ E}kv2 { FOZoȝd55Ϥ:l&΁\ #W_шzyl&7 
oA]Hfe8ޗUMm.X~EvƧ_xguVT4m}Ai%jNڏޑBu_g䶺xeMeR}֩1/\WRyaKupK0{>~ѯU~שx~\$kJ{y~;@9# 4< uApy;g70=nԛ<lQ xi:mgZy秠ǧލ?wyB~?t5--7NVʀ!;?nqY<OӕtAL =r Bdb S }iBzUR9Fe.W:z* _j@̾&BCQã9x.$r 6$CzurVPʓ6,OvGy5+?ԣ\8xq<#EE?'M'vw$NEbK^2=H'LZxXA2^#䗶oUѾuFfuAjΈ^5WFFޙx f% ]E. QHG|N Z@%"gE`4pUzH7!=;VIo%ISE|DžŁ,g8~pO[h?N8^(*{©8SȡwM푻k;x j#zǤG `6p͒Yvkm$> \'3|bVO9ga)!ŭL pRLѧii45R:s¨ӫyqɃeqʸN$xI9[QDz[(EY hM8̈$o/g< "0˵ebAIϴп4V& * : ,*r!YYn$@N8ΕtVQ(m0nF S|/ĉ˪v )P3`lBQo%ɶIgYPġ@P9]3 5/L<^@OJoCڻpMwdnvpΚ ^| 6 3ߜFBK-F6I p-%lWC(&0lR FEWB)7(XJJJ *J]!])Q#+L"h}v퉮*Z7x"O9(GDW  *Ja&:CBeUƠuIEh骢 Jo*`; WHWCɉΐt;jwfg+◻i!O=hc :wAW񀤥yJt2}З{{^nquQic25hn7Weq}d\m f\s- ):d͟fl/{ѽ]a^,0ܱ"Q2l_5ˎi7$dPhVdxb hdFO|I~pͬ +)4s>V}}IB8"6r4MkGc!d>PrFc9rR=tm2kn/e˷\̈́Iqw\?/-릉MNd&YϸN#}mIEc&xhx!X0#B rN4}>4k;k=lx=ī誢ՃJ!`oSߑ<9]>1]NS{E P AAWzC^u#++1p*ZNW]!]I-tUHWWp ]Z+Bi Ji]`n4tU9h@z"ΐpFDW \*Z)NW%L9ƌ9p5*KW 骪*ZR0pq4ֺ1ϑh鉭ㄒ}tEK.vc]%OuV_c~diH]&.Dudf6ĒJxýwcVjH;&ofLFdS4GͩpEnh=*J;YeQnVYGDWؽ*Xe*Z7xΑ5ňٻ6ndWX|U~ȱRl$q CLR}R{HQcS^ \Pb>5xl0tpY0[V QHW/tEyQsgg CWn(gU]HWZBW\E.PJ骁t/<;(]!`ɂ+{Zth4&U[c 2BBWZB%UJ)  'vpu0th-!2UJ &4B4gr;]!J#]541T3A+KIWLX&CHW'pѡxrV1$zr ,5GtOq *LI}+KmHWӕ(z!p> 綮3[W.h9!g熒yv8Еt);9+¡+{K]*Bt(t@2]!`]!s_F|+DMxpx ]!Z}+D)DFҕ*_:dj>I--()@_~ʳte2(:@zӻ.KĊ~/Q|@KMbS&$7SmP.l1,n8_t˛[ewm3PxO՟`NoG7~yF0qZAi ]ԙNO1 hk\?vaϧee;`yH`)vxW 8g;:Nm5,c.QGg^Tn߇R\(UB?ؚu fā*,iy8.?&bPɞ; ^*ĵNY*dx;\M,-=-op(06|*wwTEJ6lTl,62˱ ꒹~[qMP-rx\qm"G.Wm/OJ_նRxP7 "($ZfOӘ(-2 \CWWP j QZ骁tJDWS ]!\y@+eL0o$]aQTߎ@611F.EO&գ֠`KUЦRI!RY"x_%oI_Bj^ ~B(Rtk-{j(ODk( ~=㒕EՒ)woC:G )44F*%RH i+)!Yf;]\NhV%T<Ջ+[QC tr~fr+μVj'7ֳ-z@W6վRjV6BBWZ PR*"]5ۀ k ]!\CC+Dk$YN^ȕihmEHC؆hm ƣA̬ JwF%]nՔP! 
S;U1T)M`MB@'(r FLNӈ4W#M7FȀ [B+t(th6^ ] RMtѧ=?(]9>tW7%9R\ RVFG}O7` R ]!cGp6UYpU@tŰ| .wB:ZWM+n 4 7n8 =SJYJzЕ$ ೗u++Dkt(HWM@tm8֕"`AD++Ddҕ&6,gS&+++D#J骑tń*]}ROrx\p-uӱJE(hߎ|m~zj7 `݀pn@Vn7JIth7X*f2-zd(th"#J^]ъB 緮sy罠 gڢwDY$u+j_SFk,86&BVBWH;]!Je"]5 =vװP Z|+DIgZ됬+l k:wHW + !YW[++]!ZK}+@))t@\u 2EW`+Dt(9`JqA$ ` J ]!Z}Qjo"]FuFs=&Or_RLIG-]Bn*k*NBģ2!QhRp<kD( =z% [h 724aiW(ΘyzJʀhkJi˂Cx"JMi+!x"pYR,DWYH 9;]99g >}R(gN0s+j_S-T0tp -=d7Dj ]1Eg?AWc]1E ^⠐mZ? \L0J}0QYfS" ݢMWb e"л]KFYH4u8n -;)@74y5QJDWXpej=vf}B+^QC,Quy +7ʳ]HWZ."H:]ڣ'ק+j)E#]5 F+*Tr PzWB#ԋO%ڸ†JhVndohqEXL3:.-i0./Uk*1_,VduI +ο'MLףd#-l+ i(7L>5~Wc0n}L͏܌VLyMR蒍{,Z߂\b*2p*I:&)P# {DFXw֤0n5)$&$AI9g4 5Yr]Ӽf >& ՉZ*Њ`Չ zD`yl^ (UqII$)zm>> ,E;V#+ MoK@[Em&[#L RZyLO'|1|27fCmLH.)=6oQj-cP(ƭjGT+UǑVVll:O'eY? Oyvį|8fc&\gu:A»[]ke2p#]E\ XƍoZX'0Kv_]=/|1Ty؀$0餗/$&>p,s ˰uרq|D8DSw ;M)yY2[hC4Ν#J)NւnGa2դUTk,w / xΑQ[d &(gZÉ2y$Q`ʉ$X :cRZbX2c'\J2\_f- Wޏsrr%ʷFrѨ?(PBј?HaDMȊV' vFUQ#/&8kPm4qn JAe_06Gz_*[)Gx dPgywI"Ot#X7-h`;H=Gy,Y}E Ds֧"3<ˍ$*))"<-"YJDk {sU7?UpR{8UEUq@%F^قa-leYK/8\޴/_5 1[2ϯ*0Kb1R'ØͭU[Xs3M*n4?cj82cW{ x}lvg' KL$l4/S:x~A2Է, DH@l(H#gH 1Ȅuwuбm)XQVE'צβkI^//@R$1,e"lB>)A.LiS1HDFlғ= B ӹ#e0wHqFp{HJl$XVޯJxVx~7Tt(Ƹ.5rYmm-6yv+t8k]C k֟# ^88>cYߢ x(y+}~TX2PɱFhN{($Cf۷_{$#\^oLU蟟3(z+$ ژڸMzٱl EKyIV\(Je$+|#GЫwY}iǺ$H;Yî1@Ʋ5 )֗XWu],=^dfkP"ti5IyCO#x@Ր9vTȺ.6pq<_`p#xvLF%2%Fwb=ņYp$мQPG8l%N؋uhX2͒آ:4Oo/0XX|??!IGMFK<K|l>4ps}lA304ȳǝgsA/hw؋I&KH}6b,46F14%e@0E2ce4<ǶS"Oqٻn𢔎5>^*9(cA7 Շ cإ4}n {,#2x+Xumս$dxձF> (a9磄wі9,ѓk00%*J>YgFZ KZV7I"N8>xޫ0x}ݞ_u{Lcc{8)۱;Υ ~C z(rVB}3\g Wd<,͵ _FtH{YOiP,:x۪ ki8]|V(vjACw5ɖuȭ-R_fN R]T~z;Lo*Ivr+**Deȃ/PH^=ŒwlFЈҫ86d):r@;:PxB[#d{kE 6L "zO mkHkK0Fn`!JF{] L WZqfvf[fH='`lA=a ÌN"?U&9 *NtϛNEL|ukzG׫yi}=G\nK2.2P( #x"]3ZX֚w4<ō~tN sDf&w?_rѷ 8x솗E'-zgQw-7ɺPiftc!ؕϑt_s!1aە"h#bq_dӎZ=`@T D0>) %rV+ Гv )hJS>IՋ{>mmQdT`7e#o JfAZ:o&оy < . a]6LC05m5>:PwA]$>Z] ^}%wrڏ->ͺl(HĽvVJP%8{0R+F*IK&I~nYiOY: @ࡣ=g4^U)-L%Sh Q>q鴖~ᢩeءwHnmm쏸[hg;xڣf،?;@b֤G+ml~p)fXἹ/[_ѾZſY+B_z{0-^_}-ʗS{9ϯR(@iB%b ߲*CI)4B"_XB^#v}m~Gkgb$$Tm46u=Z9Ko. 
$BLO.J\4|C8-ڔoSMMIoz=沰|9 fwa pFJbgLOQŤbj Y?t-5d䃲;#P@~vJٺ[c4pdȡ$R$YˆRJTJIc,|"qyAqo։Kٔڻw!5C bڿ}ku'!\~zΈH$nr\eN 40]A D$iELISNiI2}8nA9,gkE׼w+^oܭ$?yc9`ˑsS`PWi e° >h!5HP s{kcʚWQS}LU&lRmHe2)|LY EQi.SS,kTR;y,hH%UqצSkU6 qJkOSZT}6^?kfs6a-"X>{B#Fle=N=SxnO` a=S 4cXk#cPlu c 1PCoa"Zk[Qk>Pk&Q6 nRe8xO^8rWN[vߡ~C bTujj9Ϣ8lG2`^ߑ!)i9 ^rUxJ{S:cb;cr Ã~.&mZ7 6G6ɍe \wޏP̪*F>F}h7psm+T Gl>bf{)wLhmxj3[6Ͽފ*aqpi!6h9HjjT(@fgrArh^:&7y)C/Q5\foPa Pm`J9*^Aq9mP=ԨBWNәZNg&Xt6<Ӫk 7kxr6p*cj`]W30;fq:Bn6ZKe ,}^%jDl2b*EV ]8׉H|xL9BUY!ecv`1ieUzP.nUМc݊c\Ir+5Y #o9yRk?9/ e!WZWf䬺G-'.m1E& eMdJR) '.qXȸoQ z߸dяȰҭrH_ O?Y'NR:B^6%`1 f'Y\BFaC û@ &aH8W̽ǔ`D&n'li>K«WR)dӾ bns޻'oݓq~7x9|t*o_OP)vn獻]>˱r\$:![R.cдoS%.5~jJ(/\Ng)) s%" w",:2/ꬨ>(&Yчʈ]1B@y*ۦwVPw_̬#RJCyDEb yMrRDxwz6hHfNU&M1TaIari,[3dž yƄ0v!dFJt fM[l(>q^=AYuQ9'kJb: є=H (l 2]''G9lpgqR| =؁o՛a#>4sFA oXjԞbXGdL3J`M@iv NLM^ I(v:Nv~"ilj}Zs@jq_9b1~uLYV3YQ3;DY8E-'?&I.k|-avoGf6VQh*0VV?w #*U|R^DEj X&ĸ\n3{s d3, Wk:dmf׭9||_LzP  a2)f*-tiW@VΡ3֍MQ~6ӧz}%uMfgQ7Fm3κeX'+kTD f[5GD{VhUu\%&tǦQTe\o:l,f}:fSWM l2<'164azD7F Ѐ,$$̟._wM+҈npP&=8@`q00&_uuC\ ^3t +vIVy d겓W K(~i5Ԕ8DcZ%@; i$C|zv6L0nêg͆L/Ya*.zC8 | \ (a_ۮ$+qy4ا I)V;i`#Qa5z.Q@ģ_|uYw'RfhWpP7D % kK/R' 3g7?l0sm|1t't=V2O; BAYȕV1ĎӐ 6b4}i\!FP"&LX'6W;4YNb2)wW`F&Q$(Fh'D4%I79B) m(wknjIm )x3la̤;Ch'C?S@q$ CK[8C34/s^'qb*"WF,z\k^4.' 
pO~~N,gVXS;m-ABIdu^Օu yQr..DULP_?ew?$x,Kdinn".4{xn[N/duO,p!9C6||o0Zm#Gh_=+|+w4w$b|T)MurYok,i7e@(:JhɻQ(y<_3J.5xq9kENV9 1iaA;\wR SC_!T;MrQo-zq6'I0w?e Qޝڱ %G25r%[z:βbd" C0^81 %{ D4Pˆ+s#N;@Vbv*2*㻓FRcCdj52P!U™Tc'Mr6Z rioy[x4wadcbr?w",Rr1L &EZTP%8o!q:Rҧ ҧxi$p]TnCS dS+}|(0f\A}79i #?q@)c :m0A~仕NuTRU#PQ+d/KmH~&\qܲ=+0ip1gXt/և~aǘ*vj&e'Nɏ_w:/e~G:IWHqEQ'leWqf>'䔊d).P.^'TT00!Z-yDbM@oi]6Ic'^'(Ĕqte7kXPc߿}ݷp,zzJݳlu jNۯV۴+ 98j(1!6/%q:/kd!byb cd ͟џ:_EߐB_mc f6-Mϣbտǹoy"Y[_IcChpKVh`?5;J^rnql=鯫Gzi43ctr,=Fh&Ly2Z.Nrf.Ν|y Ƽ6|/Kv:?ұhSF0y٠e4eca~Ң)b{-VMh4}{$+l^ZB ܅b~qnک+[*auZ{T,N«,hQr4Q0+*3L.H8#6KSq :W)`aḡS0e{$׭Oj]ؚ_oܐ߬GȤra3M>-P8)aނkqn~s/h5' bLRlzi2\A 8G҅ePHł$,P_7<4IW`>|ByA*VL0"Jػmve6o}a7˲qoku9-zHAkp@t 0Sh[PxΧƐRGg8= =J[lm9Vzƌ0NT9-Ѽہ"ytd>UJ;{ԥlv`B)( И( N P]IsHz+>y<ܗ#tUiC!3XlY!XmwJ) lޣe*^ c?T #[+nheǮM#q iqK>J8")5+gQGY !f/BU`D7%=VFD˽ 5:T%L%݉`Iw<ǧ _/j&BH^G@ߘ/gS5WKNTO <4mѮ@uU[`p1)S'jOi Ǚ$XF2T~6!|2,pp%Lc^^?qSčL|%?Qt1!Ldo`}JaQvdSҔC)1ő.?L.*y}XXc-zꩆX-_l;K?`WK~^O᧬ AbFV3j ~ hJx&k[lƳ̞:wKeٯ&MԗB"ۧr֖rө%){ښ lt ܢu;XCZ 5FXH:kbaA3u=|VC5[ Mv$ (|W@w({P܉%ޮqES!$x;:Xx5hIpe$zw3hX9pT5h)?e 6:g18Y,^W% t.qnd:}'c_pb ;7V𖥉53,!c";KՄrJe-g {>zdT՘cT&!SP{vx sf˶4qCd7NjNke>\.GG9W'|XB6,哔|^Pcksrdv|MZ2]]2&jJ^ľ1%:] C-fIKRfM.<ɲ `pIZf>HD;n AJ?*(S \.qO\)T 5=7'.3z!J݁BaqWj0\sҥ?e͑t i[k-xI.er*u>|-KdI}7u}y 6F )K2Yp}yY 'FK g2aΤHD*EܦHS. '驼^ )L$zo2;`4 UHg=NUl^9% 'ZN888|\YED$|O_޳~PlJ\ cW"SbN<6f4&Ru[}㻶]/I0zHv{jȶ͉jv4?W횧5Gh}B#NЂj-E%Y4,!Zp0d* 1|Ȓq.vLN! 
._ V hK1ۼtgXt"X{ P\|v=w#/AvZ`ݧ> .wKnP.3P޴ܓ#[/T>dҐVᨷ*j`b.\avxFPd\`c1aRXtsφsx\ʐj P>*6PW'fjJ{=dX9zE,oc3e/ClW 8M{ 9Vd<*p(cFk)oT a=={pj>L{qtfA۸FEY韡lev*O¢KSS,!V\"RdRM*A5 ,l<0dCb4*Ɇ!ʦ7Oڮ"SW, D(F[9\sAUZ6%wn'"X\ћ}KLCN>`Wta__ G1}rbߠAZ|J%RAGusE֘Rd)SpJeɲ>ݤSH15WngAҜ5zvcgBS N 5MZę:is_5g懾ֶJtf&QvT{:ˬMkCr$huCYVܬ=u@ %wѷf) ]}Sh ::0lV -n,^kZt0[{Seh$q"MAB%)(c\j9$K4=om|dgo,KN8%rsɳT!N%TBF+V&^Vt+bgxԍ5KpuE1Mz r8Ljғ3j"bu&ł$( VйB*841\vSTW9FSxBV,Y<$tH'sPBJRsfr/qޠ6itgAM oU -S`-?,Pp;hjKh;jh[2`œj$uTGd]9?Q\KP=V%W$ƒbZ(S`Ii+hd;kf#(h[БNIjX}Sw#'DH0 AC; RŬ&swt2_ <ƩC/C 8;#.hA|u,,i'eFu+2ɇJ lt(5- sH1GfR`~WGLH;p%D91ߴLXVʭCGfM^KXz\8vb =?E%l "t%c6~޴N$pX#mCN3D v9L_Bl |tZcP*k̉gŋhj=Y' lvTFKi,MACڕCZ,屢oCBv يo&,/٩^2Ml*wV;:uyI%^8\C lܕQ lR05) jFtsSVGvjjKM{Nh;׎5?箬3-M{R͢)u˴$Glʫ_#H`~X+EE[O,*,n5lTJCZso >[yd{Ӥf:3N-1+_$wKGrO*gSaѺ=uXQOa:R$ ᧕@|%S qv86 C8ɨOS>`8|f[6} |m%]?~>-m3`-.&pSS0E_oE|`#=_13+'ͻ^F߰do}im6>V p&?Wٰуtu{WA2.Fb018TItn` `?$] fe };\EA xtSȞ'#_' Dc: 6vx*AWN"ąaX ׄWg6lmd%3> a9؎iU$GeNZķtXSҔzi8])N a,|OyOFW+U,eBl=gmc=.:s7&=N{N A{vB PP6u=$>GP@ xT[0?^_` q[+1bfKJ?v nDcO~W?'uљP[྘p(v;݋4 4uLM2ޘCCBEﱲ7WPi<\-'2M k' -̬hPx{b9%c h]\: +=(j+$%I,T#n/kKr1\N:cky0?(GU0NyVBӕR/+o}C&3Zce$FFR79A'6qjաďusw|V+A<QA 1_YաGW n@u8A\,k\,Lr4,1ڞpg IY&#xH潍5ގog#T|{ Wc|+g]hYSCEK;y? !)ql<5o3bIwIc|%ؠ}t.4FGq|> ?$FN'58i-sUuYelH9xַ6?8MzpGߏ!ͤso\ /8 W7GK7A2_BK}':WI|Tȋ$+.^jjR޾/G߶g#KA wW$F}vٽeVY$pH.pXZq>M?\ }qZh\aydWuxsF}٦ ٦{Z!׈.6 [íxPS ZCx%>B6Zo9*?Gǝ 7׺u; rPv. 
v 04my2c׳OySРyVh1SDjy֛*|oc0tq<sw!(ݳ5/sow y?0+z;Mp YO7~v7|H%p~` vרTE(Ě١29}3$؆:,"TE-.W`b̠ jBQ2K'% `4FTʹRЇ5`s59ܪvԧs uE-L F($ sXC/t-FRWT#&e;9+#R]" _uHj\`(f!3~Ԑ!/ AHB 6a*5;EVDqZ:96,BZScViA˿TZz&-~xh,<%b5/qkٝ =-V^3rlxc8]F ut랢Qh7%MfO%H775`{X:lC'k`fq?spwe˂wFN$;T2IW;dLТJI ,U=  `k]aWxkKp@KTtY}fkm/'T lF)-O]#Ќv6q5B[\Wڑ+D_aݲ2[5L6H'uLfK ߼=\G?[x7czR*6Hi:?q.LQ.G?CNH0>LVOiq ҆ Em phx<-knQiQ.fV.'ϓRFT_%BGr\Tl۽P}%K(2.CmKLΈvw+Цwkc^w֝~6Gփ\ i[Ff&f-&%蝀V)LP֞\Uxu|]sm :K{˼/]c—hȂ4Y &k܏gN ̃V3t7ڲ7=$ftz UHڐ]mVqRMm\;$![X=~ NY)ִ U[yM De펋{ԏ:dTܣP#6C6˹4]"jj7/|L.\'2nnz/5%\,E `ֳ^\0VE6Cy%ͯ]|_stt/O?) YV R(sg65zJ]nl. -Z^uZo6],ys efIu6.AFK nTrס7pfɏ.qz W!Jq{yMRcMk^Ym4>8Ce! UNic^dݴb[] ʨNwTnrzJά[%[:2+T&,?JW]NNqھ{9\Y?%w:+թӲKrɒRHFjC4"}@0_=f2R3%<Wla亟%E,}$,3\cb{Gnf?M_`[NB8-{0B3Dz2Ҙr l9;:_GWMM/R5e '*o[wj9+W=3=w<ǧ. YӳDNhR~|3 kޙ5Eew&rs ]=DE+C Ƭg @Ep&P)# KKsV&sDV3uYZu@l,:_8F8?%Dxĵ^&-]䓟/z"bpqJx"}Ƌe)a\89+Z%hM2ޤo@Zk,4e#*w%ai5BS[BqAiJ QI\C{IDЁJVwA c h rvF yQ奦Sfݪ SH I^vӎq8pv27#v7^{!Pq5u֠ Ll&daq~0 5`>1Y/v;r"H(ܢ'u&]^י(p^ݫ*-Q-.p*_p^t,yþ_Y<ż~ݰWʻT8zNV68,[A J21%*/0^ArI|QF3#CB~<£*IjB1mtHH(CS_FHH?{Hn$G2a13 b;PEy!dUe\<;6z! 0<9ġjPG `@ުU9@T*Hhsu{'koc>S?Xv3U_zEHoh3[%ME DBl"mxz5l&qcu+?R\М٪<'V"y/BB8^őW|*q fJɐjn"'UvXŋ(u\{ tսm8]˃-0%銹WybYlb.wFM%}w~ yPR,$,K*sC\Oc fRLQ'oD<u>e퍈WL7Os] (!e2N(,7)q@L* QP^Ҫ zϬVYi*r|pR&bi* (V,s T(7(E0S$UT"@jIc}Q=@SAX!YYLQeJiT-Tid rT)I* $XUp}ړ$;ATbTxJTt*~kJU4Dߣ T EuBc d-x֭ y*S]ݬﮯ!3 .&fy'~{E U_VKǕ1N4_O@&jY /f\/ykg"U5%%DM%V,*YgSQIRA^7oE덕\f 6\ ۗY@ƈQᇡXc>=ұ/ld$B0kzޔmu S8 `ڼsc=/UP!0s*}`+9H4+(ˢ e Ч.E߿x&fjw jm>7/A5pF1>+ј; =.#bF OGqhΩ@3To^D%g6(Ā+ H1.%'ULQ(fBdEVVgFN ȾYRшЃӌ D0)+RbkyK̪*y 6x=A&j=n\@vzz:0@>P.Da9 KqHI#>>0 EFC&$`k Ӑ,a-<'kx G"I33D ®ޒC&178uN0A D&9Z]C)9w \f1:(AJnje~w ҋ6>I#nNFʼnNIQ!a(4/B!HGOQooCilyWOmC P"  ~/l( #:-y("sDhV ;Z&DvܗQ2O \'8z2/s֛Xtɲ"*MI78)y9Ye#thViWץ)&ttČ[/ bI^\ؐ4! 
fz$;rKʹK[OK"ĜAJS "D„J`+']5XJWǹt$r(KmOs0g퍽.&ǻÝCC=,KOv[Kb >Yqg氛PQg_h'>Hc@bkc^QYֈ!h dTcTEc3 `Y~ jp$0CLp._[F5\m/Iu]ݨM庀AegPJ &`Q0RB`XHHPVXfxArשo@ 8WMhm3ש6_nv}aBKY{APP ɂK"3JIV(*Šb\d"p!R\HsRȢd @ 㬂PhcYVff ZQ+!47M7S~(.t=EYo۫'fS_Ycǟ?"AĈx'ﯬ VW|xPq{4w=&_Jڇzg.ҿL<[ѮYnqdYzAZRNE+57T &_c×J}4 э@rV{g&tѿ &&Zl@Wռӏ:tT=dB/YvCGO:qSp4OB*c;M-u;>5u+(C A$F6=f pv$5| 4gD"vk{{o>HHUY$E=t#nz|ȮJjABLE G {\:I 9vm3$86=(s|U۽}/Ib3׉tbPpy(1R_TH7;y!0g)clfOиwqOTI է#4z/F946]ڎ?mim&|g?4 4=?]#j%6qT:eQAsbN8Lօ=9KU$zj 4@i6)].&&f^=Io<9h`Q!AHdnEtp M"Q#Id@ Ii)QG0ѿK'.)URF40(%*lt\OC9ҟ`H:h:)m2+~ѿ ӾZًG{ȼ~`̿{2ybͤ=.U !۝۔?꽏>֓lSu8;-vE_#VABF:Xˇ'LIt+R(ZǠpnw5Gǀul*+-?gjg~R=,&ܜݧ^]_V@FЂB/l!2K dB۫+P v_F"vau^]h;& ibE|7xr0uX^]]X>|~ kSp:%%%Z3:J)3*D©<:wՎP Hr~׉V'DĊsS^wul: r{&.\OGCNɑF'@xQYh<|Hme?iP=.S=";c^jZf @<[`,IS;'_G;>wc0;;;E҃3?M@bGgT-;_>?QJfFX`LkN[Lecʙ PL&ƘS x0$ήL>Iu 띠7nmxX~ Kwǘ8 qJ$ܠ7KwtbFUDŽcC in(chD $N1}tQP=.c e]20GPg"Ns 4xS!ɨ_S!!@?o> G3\.7ZMV`1=XmtU7`R\e dpS{{T.ͯޔs;jo}>A\p>5g<\ _6<+@JT%")JA9VP` Y@,J#^.t \(N!Źr2 qk_v~ $-mwMeĨcgmA/-ǒ[2y}u8y0~ĐDa%)䕪3*ZWG%Ђm#8%Af! Ƭ(mF uD($ "chqTB 1xP*⽒+Ǽzfx,Qia2km# W onD$dlE7 #-t/y<@f6h}/Fx‡tjt<0acI~L2t@V<, $=QRg?tcC%HHtxADg+۽؉dC0Ti>>Oϻ'*of&g+3-'Q<<2ޏMKIJͭHMyͨc?__]AG͜S\=j5? GM8xHٙobTЖ[-ht<d*w޴Dy]of<g!{NB %ѐ(*AJcr @Q{V5jgUs`1) T29 Uq!?>$# `D#$x׶.}.d0;#s}ѵwD]=¦mF'GyIMBWfIŨ"F`Uk D(T >c~$ B|DA">X`J%A0U>HP `9ãl_X+ݹEWg4v^%G}ƂF%h#L("CYs_%DDjÈ3 Hs}Ro, Fg4ը`#{# "ZBA"0V@4DSki [4.0IDK@#*1%&?Yp<<Myr{8:Hƺ0$e2Fd7Bق{ ~ :3eD.%h/'+\4klY)S6D4I7h]xN0uTv[6&cn$VӜA KFKj;+= =IlPyftўyͤIe), 0YƋ/.E3?ί贡v tdI1G}/(8][;5bKhAۃq!4yGR=ٌ9܂f7ϳELA*><5Gbs0;azOx//}W5]+]٪òd.Y%Fʨ04zI.kCwDig(i]Ñ+0W/sFL!> >93 x3%aa#@༫ ygzEY } bĽ\Ky W.0YiuRsQMjR &$:ZB^)@pSړե&Kx{(QYrMъ'o;Jb1,4w:-u{ٱ8xWU뼮k`1thH% z|d\ej8%C>}?U'c}Ixmk3%fKNw(՞hS "ZM򕫨Neju>ndF+ETe[wޗbٚusH+hА\Et*X7W ;X.gfn[heNMkZ,R"Oȭj 6>*[$8{&UrD閍" rhaffwflK6-0%qKXݧç#I5XrzBN5G_RH}hiTQ5P7gwzQl:CY.qΚ퉕uV _[eA.ֆ8DRpQ Xs+Q{>> ٜiA`+Pe5ǘB2WT\u !X2wɐ2? 
"Ցe[ 4Ah9uUxܹ.IR |,mq#RXʞRJzJ 8wC?KJOW: 8_C;.vr S8 ̺n>jTgbiwqhj7uӰ9m}G7rsF9 ]|j#Ǒ/p~'G9 =p϶o}Q{sBkM7@9>\:íyܼSϠs"#5 _~Z#4+z 8;o @Yn[n9Hΐj (2h<+</ic)9FВ9 ZJm\G"hɤTF"%mJ\ :ad`zc68t`7!$;ӖurQEgDdH $S?_|ɲbʎ}75kbG3l vss~|VY6pMu_c&~)ɧ%sZv۵샫i":/Ү/uީ *MKb Ed WS7qb-n=u\[X0S/?rBhv_b}|jn6N* @"CPJ21!8sQp\3x4S-`tL)ޙo6|vYJ6-Ar@qcyr!6iUKW魐Eh|Io>IH[a˶[|-o!bYw"t)̷BVa&gߧų 1vo6ٶXtAqkx\ΕܪF^b:] GyĕNY.NQ@D{|%f>_a4%[3] 7D%[G%[%Pt:3lKx23g}K,:]4ڑ.תҨDpBYC\"ߎz%]G ՗`i ,I*`899N^oU eqd67.X.8b5sq sō7%Ybv0v.AgWf 0s 6WfDFD-+2Bnz]rꥉbMՙ4pͬliچ_H{eA!3ղ0 豭}vFEd“C:(臈(d*0U)B|"@8H80Ofr?'y"(y2;WϿ]PR5!e<?TG+@c)5D"RJD Զ9ЂjDaa@bP H F|ʆ> HA*!RPC)4 b}MwP| 9f0l2@mEYޖ=-Z5wL 2뫽p ա5%~1mf s2yĻNelO f h|/F/-&?'{T-ֶ=5l?f3s0G6 cBni Dƍ *~Ǽa$u\pPb+3^L|mA80N|Owʻ'$!93ďƍz0C{5)T=ȩX;p~}ulav~H9~.8b3We\L؍0=bmR]}P+Ֆ#Ϡ|_ܫ,W`d^7Xc/ey%b"FWBFyW{g: ^BfNc}%zQ̥t]g֐oA0TJeդvoWa-So&x({2-Z^Y^kb1,Rwɑ^`In(MjȬ|%\/ O+MNaYxQ !rMm,] I/՞T Tѐ\E]t g&}֍gnwh\1(:(cݺg5/hА\E+]#g*scװؒsj)َ^yjn/!f1{#Kmk|wGF2sڴ1mꥎsS.\Zlh}VjOz$Kna8Jj^$KpȞ ۫Zk'`m8cNpV}Zg]o%9B9=/p{!rǽ3gHZ f5 iˮxhRm!QC7b;LO^qsQ7mmۦMi/fhߓ@0›m72dJH0ç$YMIL V#$\(ɸ'77HfV|ӱICKf+Za91I?{ܶ J/]@sJ97/e'a 0ĄyP?3)7q@H-RALO==ݍU0O_x`Jె#LS՟'/<6⽱zj2;1j >廻"~CC;Z$3JN!MiA͎S}wi&~1]f`c)ȍcnPR+=ڎk8b ̧oMo9Q!m +? #@D@C^ک2I5kY ;eg;~ށ1^lsPA/P8=R~R8^}mӜ\/ޞuyq/p+'ſ7S߂, V8rKI:$#ČM`H1 +ҖDc(X#K"p, Tf$_ Db.MRR1OmU'WFwN\xwScfm?:Θڞ"tE3IK%! r L"`@# t$ ;S2lb6/l,lE3kއqKwxFt֚n !wWoqɏc&gav{b>@~yw`~͕хy-LEw?LT;%>.} aΠ@=ڦex@^yUݿw~銋I77Q"D@7v֞ Ye*ft #8a0ʭxPi.b,Eq;~UX>*grٲlMπhٲ9?Kc{xfn=āΑ|I,aLY2X 3h@wwϒemYv8}\:CZo0]ӏ>jek+ek+ڡ०Vl8Ԫ*PoިJ#,aTo;.??չ5 0owӝ|JÈ#P5_:#5tRvp(jp!-*{RnB$h,tQk6Ì3}'>zx*sjQ"G9n9*=WOQe6d/ߚT %"MT\T&=]Y[9qY2~/:*J=Z""Q6 cRiV%(e?!^TʗtﺼXSH;@J?e۷e_th"'_td&FŨZU@kJ?ӧS&gnc T1}{UN ϓI-ᶵZPHӧ4=>5Q: qGrB!? ;#j!@/p4; )mȟοAf L>31TIQI9EladGK'3~@j-lu# # /17 m@ߝՁ)O'31|}ʥ8nd@~#"-U~PetFhPDa X]wdj˨/֖l_(o5ܽn'V/<#P4c 2!mC4tT$ <8 F-Y#X!qYgC8?oQ.k VډB/?YdrfYT[vxg媐Wƶ ҉~Y<ß莬莬莬ʢ|C4O}`"%@F k`D!Aw WǥA?{ok~d Ih)^5>$'8Ye=BcOxߘ6"2V8QI6o}X~v2qB@IW R_uQߌSJ[ҁJZCۿYs%\3e:i*.IdKK)<#p?:<U()5f Dm,H#Mtk 1`!y^[+1Y1Y1Y1Ÿ|CJX`$ `lI$T(FI(UΈVBưDFHaqCb?r4. 
,gtXvG(f̠Y۰0 |+K8nSCŘI:-bB fGHA˗.ٻԸc@(20B*M a"e$vbmPFD@ y'3@i%4Q LerT*lޫ 0mI%/.znhxIHLDcg "`($ĶT8ka,lG YE _sw;a"v KNg4L0,,(i~PѽQe_aϢ÷)ѸLcz?N2ɕxߏ[O='Ho/7]dEv{]M}n!1}}IvOˢDxa$]v+?Mpa,]8^!^oPOZ1`GY4{ %}J_G02$A$s/~ϛ~ D hp4X˜C=qOv.~*Kj}& BD]::lpss@N p~P1νsHU:Y p=7:)4KyZjΩ-Vf\}7,9]bO}բ8}/Dϧ6!)2f͟sH =-@l_z ҇R< 0X&Oh}*AQcqReC $GKK5a%XO ׊b)%=]^ZxsW+|pP>Z,AK{,Nhn1>Vun9X&1,"Fuqj{W_=݁xjk jWOupNkc#q6%fy",Bt47taD(`2lzU܆ ԉp ٽ `b;p7ww p x08!I1?kBF~oD[tS:<ߕЎЎЎЎB[@PDѰ jtK^(C;CbĆJ9K1ٻfD@ [k?uHlb9Pl+|%)]<2E=~M7rjIl{5NMqڥO=!^bI*PiEl'P4mj[/DpN?]d%+xŋ/ @olRflpk%j7bt)"'/\(6u\QQ90eD`U3lhE҆KMhl arh_S>K^;ءaR & ZUaQbmUf/򝐎r‡jrp(I@rV~u|S:h~]JD8[~ϡY}勄4+ י *`? =\gJWLWH]QU.)zx|肦?mI_S4۫caY02x;7 _$QؽfIqus?GQҿE^ӍXTVJzK/zW-WZ餔E- )z1G|e}3\qG^Ln4K)\{K` v xe͆mcpAg巀i~C̖NQ:ND$/N{~&mli>wF?w^f*Oٗ|ÔY}m<oAPgC\6$L⺁.x`a{ޛ^?d^/îg1>P(1)@0bP<%IIL$LpERX^M+Րي\k&C޸3DBdӥφ> Ȗ (ReBz 5XiXlW*:s/A)E )!F kC@#v48ًdt;yu0C$NlCSE6QyD9,0+& BE I#+> 4T}ڌ׀ : 改Hl+BW=cxI=5bqșuWWuJjS.-'LjM{"Lj0۴JH(D$"R(R1,BŴD w{M&T(+Ig=s7u{3͜}C?cK 9该ٓSnOP0damh9:9Zf_ GR LRf%3*;5dܮ>=Ermy$ "0bC f݈*bB,tV=Sa^Ji%qV,h@wTƏ2HS QP+(TmQQʏQi94j8@p*.gCp9\JYC4'Xn.I%ㇶ T|UG `jFvnj3;鑝5u ƙ|xL!-]]N*RIrN1CȺF_ކrC8YRA`,`~<}x.b~,z4-oOdI* IP@3W@; HV^I#f܃ۥ OQȔ F ''V ]?9yMf9@8];;kxK,yK\1uuγ `<|qQ{ug O DpE[p=!T8i3{cnwvIoa1'4WϕH="C/="{n͛|o$%^C|;nbou?j7qtꏎ=r4Ѳ+boEpݕ#O!:Jሊ0.U9|?۴\qa.=jOټMW L6h87|h4q$t9\fNjtxGyfT ڤ Ԝ>!<646!'U`l1;Ƽ60$E:rECNhcSHETjL͵yyupkXCބFg?`WYk圱|Tw LwC3^LʩrnrGŇ bisșW¬nNTT9< VQ(wvL,H'?eISxԦdhjөN{K\3lR}'k1lUx7&7KjH/1kQvƦÝ.V9zkp{ۿO4TH`ƅr,1{DWWeF ߣ=^yZaǓ(y3]==}QH`xDT$zD"AOH̟n$ h2F5=_[^l"Źeh3l""СQ]4EhyT1gK:}hȓ^ vA&{Yn Mcl]Hm QU"bN9 䢡V#ў0TK"RX&S@-Iuf.`56@8D,Jݫ*D|VX>H0;Cl!P"r^}*`@UȭmeMڀyU߽V^ĺxjӤ*kͯL@(]U[y0(@>{։\ +mixo,IE0/Rq+ټ+WJ8#Š "V+Q(+qrVXچEx> 2c nT^ZF.^L`F*7f&Y֍ 6KޣL,ݎ:%&hmǶ՘Ă6o޵;MvqCCXC~5_áO=l6s qTz;Dװ$\s"4ITWwtz'Ȑa`|LCa+.=~}@h_+b X$]38T$  mo๽r4zd%eKwMlSRB]@L!/*O`gQ@>9=S5J;=SztmT` 3Y3eM!i Sꞧ0X:ЂoFwpcX[@m]wGɖ9~<yVozn dُ5sC1ynUMF{hMFq|Iߊl@PLULW' F{+,1GaN^V9wgLL[W4Ս1ZqK6*Id D"p9Y:}sצ?Id{맕/}͕j 6[,IH(ɩB\J]x*#]#_4sf/0Bk25e:[|\Do3=tbG}|[4J,*jJ$> B)aT2 
G:{B$C(ZrMtIX]Juw}Ppɽ pA(yox3F!䭉b@L`]pE>}ށ_z}#B8_Օ$sVq作:%enwf=_o67*wh}p599(%W"NƶٺOn-&qЫPBIhd ac N%@a xLh'ٹ ٙ:iɃꅲ$ B@XYlkǟUX1by5/k7W^ځUbw@;>+|aV,i&< U)Wee髗}#DrIPĨZܶJZ 5-݉kA6/Lttb#+q6/WAd_t-&ab"..=E{Vy6Ƹ0V_ ᲗPSY BK*BTGԺkiHىjS2N8"A>FS7pf" pB pOC$<Ҍ@Is(C̩^[$D*1?Ɂ}* 5gCRfWĘLRyi;Bpguцb ~֒=uR ,zj>A+h$S!X?e``+ͤmǘ-ԈgUjk{@ޓ8p=3;fwP龜 ;p("ڃ1#< %<@D\rs"ߘґ7'l[ЈµHPq'l0Čs'HUմ#-hp r72>HM`5eFYwfg0 ʉI ˥68OvMd C!*@R%/udz)BKKX5#C5Œ?a8{T A!'(tvh@Go}b֔k|&ͷtY)a﹫,U.~G4فjq\J.iòXܵj_pj6^3̂2pE5Ua|0bՑ'Z)ak 8#z\TtZ^[8-=6mnF*[7yanoµRt[F4ւ0%Yy!Ѷp)flaƦ?O:5d7HADfBM$D6XǨLEaDmdN0w^%L!k4TeR1UT@̖lt)jel1_VxjMGͨ$d)xȯmWޮqjj)K. n;&΄&GU" #sv7( e?-39Ŗ %V^xݷդ݂Lc0x\l` L tߎVTPb7R8Eea<$@BIۼ3uQͫuO{}JkR췛> (G!0 <>~F"aeo9SKMYk3'MRC٪W>[Ko7^\ݹJI?$BwKdV]#W&֚TajΡVtk)+bfj"D9ڳZy* b=7\[υ7nm=26沌E"di+Eiasw_aM|aH5{ `yo.,E{ pyt)%*>GzzV1 ]!/FY" ##[@?"UO.Q.< EƙkUM)ќaQvdsK͛3h-VjgLxN5;X*.̝Ӳ@)'4'nh{wv/Ihn<:}hVmz}r޵6v#bmHKX ৤w,H&5m;Yߢ$;yt$Y(L:|YǺy|f6??[g7Lև;;^)7Wqy=}z{z<σǻ߽ʎ-&e׏?>KW1ޝc2sqC͝6-p8!d5[OJQ‡&f{0f0e˫"4gTzI:Cm7nCC%srAQPf2~9|t?``bej.h~AcdlS(dWcM  Gcutv[&RmǙ,&=@IH.H L K x@AQk $L(BN~gj55uWgnpR fNe5M~|gq=n\*3AorL&a ?py{`,+|Eqbe0乙!y8DE oETA6YԠy%CN3(^bSQ"}qm%#}e%F_[l'Ad )'Ǝ]d6Z,u egU 9Q/[\gNMו#`+ԌP<]#˿>ըPʞZݴ~/w??\ȱfo?x!7y?> wS6;S >wC ЙBcArЂ:vnUsDʚmGaҲd3.bE'1rbH/ f&Á /t5bͮl'uAtnuGҒ;d[bd쁟b%e{6-ƀaդ dI6Of vtyᣃ;<np('aGKtZ,0N9!,,R+99U]${l*}_ԋ}_>#ڈnHZN'Q -y0t txKAd á/77`CwvC8ۗ}KJYed去2zb5#t5@$)]ULJNu7HUN#"gm:1˰{ڔFUV 'XZDc5knZUHw3%Gu=UJ ZTijR:1֚XI`) *@JY.Τ 7[v̒UђǎJvpW>]<%7Aͤ`q}͍Fwִ:xMxrvKEzH\~-;Z9i_y{yܼx?vN26qܼi;7KN-P0`zPW(+B\ܹkwڬ:h5WKS&5kI$vAd_77bوi+Vv6T-[}yMή"(oV\wn킴Y1j3g}EGURvʲ`f ^sI-R|06t5O%طA*#;'vKwix a.:M.:}]wxFl|%^2짛e죏o̬hѧÆ=oy̍n8cxTbN_1YC=yםS{Anuܞ}AK֩T,~|Vv:~[{ 3ܵ%6,rAJ9 nV'n^,*er4km4ȶ7/Ͽ|t闫'k=l_'fL60 Wɻ?dۅj|j7kCaluੰ..ƆO9 B޾9Q//A2K ~cs oW m*ka^CzE* fDD5uj<]ۛTƓ|x? 
1epW;%4P DN`N%j@v*I18ۑ HZ֕`&fU Ҡ.E{R _{dDOVl$p6^n/>揓܌z S?/7yjTBvVEKy;66TR)"$c[2fJHK#KKpYlgU: ίbO:6[0"^'h(V&)C\m 9j-{l=[6Ya B0{V֎c,kq Cmr/hX/:;`mM{( G6\34s/9kbϞPt~ {l TJ* a++[VVsL&WE0'ZsTwE0[sJMjwЪ~VbU鸠f) 15\m͟b8QXf<@6S/.?W }YfϹA0SjwՁ`RN *r3ֺJ @:?CS75~HǺשގ }&:[vd 0t ZiFݵ9û8Ǹo”ͼ2ic}5O#[ԇ:[,da tns 4 =ӂOݍ_D՛f" /x0~Qn1ޫ^‡0~zK?`oQ~@5/M1QoViۿH`~Hr1Bgf Ƒ ,jsȆK?qr#4<֯c_)!?Yc[z,Xiv&{~T\Љ=*{SPAD _=%(ϜE6|.;eΪQ0e[9y"tZQS2!8SA`=K_Z 4 #X4: s9Œ> UEcp+#{B5ѨW1y=[HQ>y I`NCh CJ`-!~PiAC4,"OLR~O3O`76ݲ - ҁ|KnEL|[} (-5ơK!M'$O3ĸY7Θ,4z!)R-:̮[~Q3GDnDƵxn;JQ]!$T3 P H7x0d3$L)KV7DbRgDb_Z@h&tshl 0*HKiYzQq \!!/qtZ.qvqPGMШqh> Eq(r xD,As'5GĈ5 PXs6j)wc>kύXŸ:4QRRA\/*n靬YRO ˤ֭ \?Eh_)=N78% ͘Zi5@\Ʃs"׿ߓytjy͍RԩW`u&!_:ZT >]J: !#Y b1H*cvvЉfQzy5dD;Z=\$~AOlgTґD|.rEYؿ8.--|Y_\/e)] $, R _a2} "PRǜi{}ҤBAw{S\}r 2Ǻ= 6RQ"Gp[b<ӆ vvvv&Si0TXq XhGIΌA2A#/4T[VtɍXGf~/Zgz5ue@A+ۉNGj`xPi) <](%ҩȂz"&X)+DHv%#,V#w5អ H/kr?,W唑8oeoqN>0 ъ2zf=[fLZ= cƑɣ{0WHRJ~v@i\Jr$z1Ҡ/}¹K g:S{5Gܚ( cn/u W8nڵ Dj+#X`U<&E2hg -Ro0wVi}aTo15^Zkzk!]T.[: 1p1^^ J [/xŌF2%˜ ?1QCASA~2&t6*V5g+}}(^SK`vzf,y6=ksZ+KѴ>[5,%ծ>oW(V.W:bc]f܃ǰ6_T EVT(搆^4ü2lU7_XarR+0X$Bbs .W=wvK7ڒI2 Q0/ LǓ~˒ |UZ["K!: fqX<]f[[擅$_EB3HDS9p|/00GȥZrܤݩ?&[:?# 1' f!=m 1 1ePa,V(ǽ qQ|Z,3S|q[=լL)D-2 /9b R3Qi*9UrH4&/CϬ @ c/4㔄Fqj} !Xy&EE;Zޢϭ\Do1xݗ]AWqQHyq &Vps$܊xD[aI!Rzs >*kU`3D"ϧ$"Hegb)E"qIbxi}eLГ)!F}A)Tܺtو\h6P)><0i @8 sq?]spcM0X5h`-%p\ON Ύwgbz" B0q87;+`"pg;sm6]wh?kf)/_;__V•ok*Aԯɧ%%JHj,\ x{MVs ˄s;m̱c"QZ_nQu9bh6C%,ug?ֿߘwÓߟپ}(Y(L#D_墦YY%8[9`2Vi u@GDپZg XK2Q iS>Vh+ZP\'jJ*.|yyď1a,LY݇\ӧ-R=gQBpjK;R\]rp?3Z4}Hjv='sX6g{Jn[ uy`;&(_{vw\aFOw]T4CkNe`7~y70kW8p%` J6å_9noVluj Jgqs>>׷ | /ϻ?$Gۇo#`05hVFNz3O%r;DyO7s*FW^˚?hmq(OvW*a VRT׾HD+rݪWBcmmBnsz9e=eST OxR9w+&E~2# u^and]%O}zQMt_`>t4?!<3WLqá fktb O9[A24Fa5 ,s}WhAF j"U?L ·b(;I3v87j[ Cozs;͋99s݋"}+>) FzQܭ&٭xoE]e[g~NߘaPL9>_Db >axcF~&|m϶qY7 e Ddfڷmul*6ibV۠W N5/ 8uկ`wUΪ۽39HԲ~踠һSkBo[9Ջ9n h ]%oN>ㇺmm'CO}_%zxBo*5ސa~DIvnIɯZD[fW"54j?i~ƓNY&h[gۧǴ oJ [7K{v}1dRV,+WrP׭CxA 
qv=lItàY^#{oSZ~\LrlɞegZkd<82%盉t;kŋsvpyǶίVVGm>JBUFڑ\ڇ'lUVm+8:)_JEnz$D;<ƽ~'5Lz|&=7׭N9:yܿje/[6Q̟t+5NrIk?ޖQ׬nkwx,{W%PΊ'6IwخIw4oOc+3lUSz6Rg7;{w_Ϳexq*k+B^gҨ~}tBE牺4t胟_qۿb^lQ9|? NENiX) 2IbvA=dG~%%N&-2M!#5 *?Wα[ Yf|dpN̞^wmm??Pl׷i>aŤ\G9^|î}|:|4\=h V~~^`.\}"p#!j[dcY<=]"[0UKxxS? Cˏ^3o?٫/~+0 xߟ*jh?}8P=6 ~Qg[y\V炥-`}߼|;d|K \~|5L no t< #D(W/ϋJVd:s>C=>w#RMċ cn1_/׆ " nwY8.-q.-},h/SM G4ezx=:/< \#AQG%9)Sʩ dЈ !qΘĞ+-'8k-DW+" _c߽ʲ deqQ<{MS0ĚDK>#i AO@x JGBЌ | f)A- 7UMSJ2EQ yP;A(\z.vCR7fէy*,:vNv*J&2\qA M7jH~McvAdU%fhyА΂+Tl!.'~ m2 ;# u/P/j ބrt?"SGZ [}5xi 1: ]9oeH4V@jq"tmaЍύPIe'Bcc~h['gvX`F992vq{g*& -;=*q[%_(,ncᆡ{H8:^ç^yߏi~ ݟNs>]'6Yl={>rđ1q~)AiL_JtJۓ^tI& ] ܹSRVI;Hұ'2iu 괰Ec!Gّ]H4X)xGĈzHZJGBmƗGX"DqQ ,`P"1aDf|3Ȭ@X )iŒʘO S馩δBϝnFҙ;S"S2R~.dS-M0cYRfl2Z0O]S }"? '~3E$yg2:Čg|5qȄgV$ PM[O~<¢/$&}՛] ( 45FLٖ)]{/#}do~\h9A9QR$͸G*Yys^cB? os'懳uʜɘ@pgH$ ;J)HRU23C`sO+T#ύ,03ˬkp*g<^_}6݀c?ʷl,.8BC/'x=yHv6‘l6>;=DϔԪ7Lte;So&mXz+7N'BaΜWoAHIթtMLk*Pk-EHl=5W~2+VXPYF`" :CAK1!4gn1>gUkd8N,c7Y&Xfj&#e+*#@T-K$ BHԄYl"U` "SfuB 22Rd8eFaLf%754I`%Rꓻ@dH'@)~ɟp*`H#އR 9 hd(E}FWa t3p$S`=)<ae %K.T ԖDvr%1S[V-;?_W$\\ބ9ݧI6Ur/XWZB7 ]XdCVr0f;F^5!k}|XB__oMyX<60G6SPZ̨;1cp_ vL++l z^m`!ݪG+mkr ʱjGK"J[ʷuqP+F6pY}=:05PHn89u$Pp/$cL& ;CJvVF3o-X >d}p"oN{S;A2[$Bi7lIu#%})axR%@N=/Wwg;X{-߃+*RID/B XWCy1Io'o"sTEN*}XÈӔ9΅HfRaic!fEHﷀM uV ukҹozHC -(W4 _[_yiB`|E:h4>AZIrQ-Be.Z aL91Il~ F%rbAׂ<ά+#41^NC)vW#2(iZ\#cd^e}OHZ!1ҦifI g,H2 ,jM3h31g'޽5 wksHa=D[0ّ5: ESͮؾ"A5nvT/D `c_0DS*M2>Nk,ֺ/_ˌA/mnVT'Q,G(Q_G1gñdՠٱ]r}: 8!ֳŠbx.4P<+/S+iN/,/Zi5 ɉPF1>V Iph9999JdeRpIWc()acZ'POPBH)[9kp|-PFj\>o] bR$Tz͍ʀf? fc^ rto~ 뗯SSc\|;Dp^y1W<@v=^ G-ZR{\ۚݗ H^PN &%'Pic}|$I:#}D q(}I9P3E`hXt*~Ғ'ա';KTWWEYTB^]H^hXOHcGMHMM.DJQD„IBPN0De>RIXݲXd- DsTg1[͞z$ ?vu IfyX5Y"`d^*9o'72JXu=T%mPs,lKi]CƀbS8Lȥ̂[-,6g'V! I"&N{eT7XP>Qu!=„])5\\x l‰= 9u=`Pv©XOgGF%rKPU)+]s=JɻaR(LW"pK3Ҏ }fOcxG~ۋԂc\cr{3k.S4 *z|nz|^3$pԌI-NC1\W+ 0\@s3եZM5,z@Irm{\bF$ײ31/I+ꌴJ=?B2]7U>vQ۩vQ{}w?{@Xwͭ6Vmw.n>..G9CJ[Q_d;WhkEFF?1GHJs[mj\3Z843S"DػA(#CL6;O"' LS>]. 
~¤ZVsB;r麻ԍӕwZVkI8 R/~x#VRѸbP38G/V5ڷ e&^y"+ی,dr~2"=-`:~;;?َn\>Ŀߙ6o \|m=A;:؟OG`MgGUrnM \XW[ EÃWqݩ2f/b|/MOkv敤˵Aq5͙&}qMPh2~΅ˁpZ3u.pZ{EPyJO$Ӽ[?I8Ki #lvxG=9V;?Y%Xa6mKw$"*gb˪kQ)G:Kd.d(&,фrhkDJSGś"v̕,-|,*!e 0oCQ{-[g=JC Rr''g̨čBƒ.pxa+ZEp6,F$l:2%U c[ܶ`{-e@[^ccjk eT_R-<]1H;\ܣL~,ƚЮ/_v/_!߹v)h_:TOȺn]1HQƺ=_teqo֭{Dև|*IGn?uAt69VͺuoɛhА\EWtJ&'6)]VҬ* BJ\ͫ):vG [^m "JW4ީ!_F{ qt%!AҨD\ɝ6Ku]'5n݄%dĩ pȭZ?Rv:$WkoԞLԗÈUbI5wiRyPY 2YPFzn|6bJ(#oG.x ֛E>Z2@pv4Ç.2Cdp VbʱtKŲ,,Eʸvi2pk*Yܽ)`Fa1}VoZnov^ mӴ?){]U񰒣b;,ٗ9jq,Kуޞ=$ν.?Yj 5} BKy\_;(tA-OPini x ,BU1pLv8CA,2\PgөXyy 1lzZ(BK7Q}J5ڕN'-=6-T1IK%Zah&/VF'[zZ֓7>kɬ8 UÓ-=r-UZrTKZM|)-,LoRb6 G'G[\_k,(wfuw*g?,QP0!#f!ȟ+UNz\}sqCcd E7ʮލ*YɊg@eEe0cP"SPr>/G$N>9 ֲ h\0xN˭jN-JՔ͙BI4rZo` ?d T %h:m Ē4'+Yi,&x/~Z3,ZaOjg]J5Sļ+ 9OWҔ4LI33Gs=ӂ*^XΟ+R, 럂{?|!sz2XXy9Yڌ~ݿ|PkV/(J:>0lO;U C{GlI2QKe}[CMbLwnއCy6Ppr TƲ!A`|H'i:in/ʫd}gt ĕ>-Ȑ[̠DR3h~1\u4vg\D'R;N6-0gY) *c~ |~|3`c42Jo/c0 Ped),2 x$Jy_FcӴ+,4\:۪aMP,X2r-YևHcy, Ʋ6.{>?<͟TSyυNRECߋ%euUaQd .0)oQޖ32")MCptUg\Ea-t%BYxy,~D b%>oȁrN32Ҝ4"0}҈؈v҈\/z\ȀXzR@@ԓ{м(*Jlj=mB:/eY%\%P!:fLI 3WjNWdz@$' 53gp<[I?M*͹0;"ao ppl$&u9)Ÿoպs|\unbjGi#! 
Feb 19 00:06:34 crc kubenswrapper[4889]: Trace[1855516623]: [12.006834649s] [12.006834649s] END
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.469172 4889 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.469305 4889 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.483036 4889 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.498750 4889 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42714->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.498850 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:42714->192.168.126.11:17697: read: connection reset by peer"
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.566787 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.566839 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.566868 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.566891 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.566914 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.566940 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.566963 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.566988 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567048 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567071 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567093 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567178 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567203 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567248 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567272 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567299 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567306 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567349 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567379 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567405 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567433 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567457 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567481 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567507 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567530 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567555 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567577 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567595 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567600 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567665 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567695 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567717 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567738 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567759 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567780 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567801 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567820 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567842 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567867 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567888 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567910 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567931 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567955 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.567979 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568004 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568035 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568116 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568110 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568147 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568173 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568199 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568243 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568271 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568296 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568320 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568342 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568365 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568387 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568407 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568433 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568455 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568479 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568502 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568529 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568553 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568577 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568604 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568630 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568654 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568679 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 00:06:34 crc 
kubenswrapper[4889]: I0219 00:06:34.568697 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568705 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568765 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568791 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568818 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568827 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568847 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568874 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568901 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568926 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568950 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.568972 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569036 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569062 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569088 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569110 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 00:06:34 crc 
kubenswrapper[4889]: I0219 00:06:34.569140 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569163 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569189 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569214 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569256 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569278 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569300 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569324 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569324 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569348 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569373 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569402 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569429 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569453 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569475 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569498 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569522 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569544 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569566 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569611 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") 
" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569635 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569658 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569681 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569687 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569713 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569741 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569768 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569794 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569820 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569846 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569870 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569889 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569894 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569973 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.569995 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570026 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570123 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570124 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570155 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570183 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570209 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570285 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570285 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570312 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570337 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570361 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570391 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570418 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570428 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570442 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570471 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570498 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570523 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570546 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570563 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570572 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570655 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570703 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570722 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" 
(OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570742 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570780 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570818 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570858 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570891 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570926 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570960 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.570996 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571021 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571030 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571016 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571069 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571187 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571326 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571352 4889 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571383 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571416 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571412 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571497 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571546 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571569 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571593 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571608 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571633 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571658 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571702 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571726 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571746 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571784 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571813 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571833 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571834 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571874 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571898 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571918 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571939 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571955 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571978 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.571998 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572041 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572058 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572056 4889 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572077 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572094 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572149 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572213 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572268 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572298 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572328 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572353 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572380 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572403 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572477 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572506 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572531 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572540 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572556 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572592 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572592 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572655 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572709 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572751 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572800 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572840 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572882 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" 
(UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572919 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572960 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.572996 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.573056 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.573082 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.573090 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.573320 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.573478 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.575711 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.576380 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.576424 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.576454 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.576633 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.576855 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577136 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577302 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577460 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.578016 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" 
(UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.576614 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.576770 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.576864 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577019 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577183 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577016 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.579870 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577292 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577324 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577650 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.577770 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.578302 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:06:35.078171582 +0000 UTC m=+21.042836593 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.578551 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.576949 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.579863 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.580159 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.580293 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.580387 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.581327 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.581466 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.581580 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.582994 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.583371 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.584274 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.584607 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.584748 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.584771 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.585036 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.585310 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.585414 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.585469 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.585466 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.585722 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.585847 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.585876 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.586356 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.586385 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.586834 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.586878 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.587109 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.587826 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.581604 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.588265 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.588655 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.590782 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.590907 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.591124 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.591304 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.591624 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.591734 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592453 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592557 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592653 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592729 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592809 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592885 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592973 4889 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.593066 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.591818 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592444 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592464 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.592463 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.593397 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.594187 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.594310 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595436 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595459 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595485 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595503 4889 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595521 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595539 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595553 4889 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.595555 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595568 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595588 4889 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595604 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595619 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.595636 4889 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:35.095616466 +0000 UTC m=+21.060281457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.593069 4889 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595668 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.595726 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595739 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595755 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 
00:06:34.595768 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.595800 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:35.095792491 +0000 UTC m=+21.060457472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595814 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595828 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595838 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595852 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595863 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595875 4889 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595887 4889 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595897 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595908 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595918 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595928 4889 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 
00:06:34.595938 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595949 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595962 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595972 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595984 4889 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595993 4889 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596003 4889 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.595996 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596014 4889 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596107 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596129 4889 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596149 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596167 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596184 4889 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596204 4889 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596248 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596268 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596284 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596300 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596317 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596336 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596350 4889 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596367 4889 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596383 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596398 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596414 4889 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596466 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596488 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596774 4889 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.596796 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.598034 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.598054 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.598070 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.598086 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.598101 4889 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.598115 4889 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc 
kubenswrapper[4889]: I0219 00:06:34.598129 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.598148 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.598162 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609452 4889 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609489 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609508 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609528 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609543 4889 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609617 4889 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609645 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609660 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609679 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609695 4889 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609707 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609719 4889 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc 
kubenswrapper[4889]: I0219 00:06:34.609731 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609743 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609756 4889 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609769 4889 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609782 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609796 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609807 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609818 4889 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.602099 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.602660 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.603877 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.604321 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.604779 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.605077 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.605994 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.606517 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.606554 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.607210 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.607668 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.609955 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.609968 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.610030 4889 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:35.110007406 +0000 UTC m=+21.074672397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.606107 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.599279 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.607778 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.607786 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.608333 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.608366 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.608661 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.608903 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609078 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609402 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.610168 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.609794 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.610138 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.610442 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.610476 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.610497 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: E0219 00:06:34.610566 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:35.110533231 +0000 UTC m=+21.075198422 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.610833 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.611101 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.612184 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.612783 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.613614 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.614369 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.615393 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.617249 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.617681 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.617734 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.617765 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.617843 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.619030 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.619159 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.619192 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.619692 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.620111 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.620468 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.620785 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.621047 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.622532 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.622774 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.622956 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.623417 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.623550 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.623747 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.623927 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.624037 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.624098 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.624177 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.624307 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.624304 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.624401 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.624440 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.624691 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.625275 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.625338 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.625333 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.625362 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.625857 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.626059 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.626214 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.626335 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.626556 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.626884 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.626939 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.626956 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.626969 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.627091 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.627500 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.628036 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.628063 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.628318 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.628570 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.628617 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.628724 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.629012 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.633274 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.633595 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.633668 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.633650 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.634259 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.634732 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.634861 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.635769 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.635949 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.636044 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.636122 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.637190 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.637413 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.637574 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.637762 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.637852 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.637939 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.638155 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.638247 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.638993 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.639160 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.639237 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.639593 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.640846 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.640945 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.644060 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.644213 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.646901 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.647140 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.647248 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.647697 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.648614 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.649609 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.650934 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.663747 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.664965 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.665272 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.695641 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710349 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710421 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710472 4889 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710471 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710486 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710528 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710550 4889 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710570 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710584 4889 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710597 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" 
DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710610 4889 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710624 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710637 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710649 4889 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710661 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710672 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710684 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710696 4889 reconciler_common.go:293] 
"Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710710 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710734 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: W0219 00:06:34.710691 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-0ac000c91f1d983727fea2cf79f2f18b2c663e904c87936cfadda70d1858855b WatchSource:0}: Error finding container 0ac000c91f1d983727fea2cf79f2f18b2c663e904c87936cfadda70d1858855b: Status 404 returned error can't find the container with id 0ac000c91f1d983727fea2cf79f2f18b2c663e904c87936cfadda70d1858855b Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710749 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710809 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710824 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710839 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710854 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710869 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710884 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710900 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710914 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710927 4889 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710941 4889 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710954 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710972 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710985 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.710997 4889 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711009 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711023 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711037 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711050 4889 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711061 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711074 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711087 4889 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711099 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711111 4889 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711124 4889 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711137 4889 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711150 4889 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711162 4889 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711175 4889 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711188 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711200 4889 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711211 4889 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711242 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711255 4889 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711270 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711285 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711298 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711309 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711322 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath 
\"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711333 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711345 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711357 4889 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711368 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711381 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711392 4889 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711404 4889 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711416 4889 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711427 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711441 4889 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711453 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711464 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711476 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711488 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711499 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711516 4889 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711530 4889 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711541 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711553 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711564 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711574 4889 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711587 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on 
node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711598 4889 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711609 4889 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711620 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711633 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711646 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711659 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.711701 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: 
I0219 00:06:34.712352 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712441 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712465 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712483 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712498 4889 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712512 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712570 4889 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712592 4889 reconciler_common.go:293] "Volume detached for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712612 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712633 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712649 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712662 4889 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712676 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712689 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712702 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712714 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712726 4889 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712739 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712751 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712766 4889 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712778 4889 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712924 4889 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath 
\"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712964 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.712984 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.713003 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.713021 4889 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.713814 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.729107 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.730101 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.731677 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.732720 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.733916 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.734889 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.735026 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:45:21.796519752 +0000 UTC Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.735894 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.737140 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.737123 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.737922 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.739256 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.739911 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.741149 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.741642 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.742139 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.743088 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.743695 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.744643 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.745011 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.745564 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.746664 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.747127 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.748167 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.748609 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.748616 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.749616 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.750010 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.750607 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.751697 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.753816 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.754611 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.755765 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.756388 4889 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.756536 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.759917 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.760573 4889 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.760976 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.762797 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.762881 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.764578 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.765343 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.766504 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 00:06:34 crc 
kubenswrapper[4889]: I0219 00:06:34.767211 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.768110 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.768739 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.769789 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.770812 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.771354 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.771892 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.772928 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 00:06:34 crc 
kubenswrapper[4889]: I0219 00:06:34.774104 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.774628 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.774935 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.775157 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.776186 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.776812 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.777882 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.778360 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.788503 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.800902 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.846667 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616"} Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.846725 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0ac000c91f1d983727fea2cf79f2f18b2c663e904c87936cfadda70d1858855b"} Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.848608 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.851092 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47" 
exitCode=255 Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.851129 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47"} Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.863540 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.874104 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.886336 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.888286 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.888286 4889 scope.go:117] "RemoveContainer" containerID="bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.903536 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.913963 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.924592 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.965745 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.970847 4889 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 00:06:34 crc kubenswrapper[4889]: I0219 00:06:34.982940 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 00:06:34 crc kubenswrapper[4889]: W0219 00:06:34.996107 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-3382824ab1f4bdb890c87a1f54a6a1feb66e43685c12eff93d979d68f393db6f WatchSource:0}: Error finding container 3382824ab1f4bdb890c87a1f54a6a1feb66e43685c12eff93d979d68f393db6f: Status 404 returned error can't find the container with id 3382824ab1f4bdb890c87a1f54a6a1feb66e43685c12eff93d979d68f393db6f Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.118348 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.118423 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:35 crc 
kubenswrapper[4889]: I0219 00:06:35.118454 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.118475 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.118491 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118546 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:06:36.118519916 +0000 UTC m=+22.083184907 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118583 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118601 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118643 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:36.11862981 +0000 UTC m=+22.083294801 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118646 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118669 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118683 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118685 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:36.118667041 +0000 UTC m=+22.083332032 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118738 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:36.118726263 +0000 UTC m=+22.083391454 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118775 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118818 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.118827 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:35 crc 
kubenswrapper[4889]: E0219 00:06:35.118861 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:36.118851006 +0000 UTC m=+22.083515997 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.240185 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.257391 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 
dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.272670 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.293637 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.307645 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.318369 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.331597 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.347196 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.635763 4889 csr.go:261] certificate signing request csr-zdl4b is approved, waiting to be issued Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.684913 4889 csr.go:257] certificate signing request csr-zdl4b is issued Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.724685 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.724829 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.724919 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.724978 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.725032 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:35 crc kubenswrapper[4889]: E0219 00:06:35.725082 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.735780 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 06:14:06.358549741 +0000 UTC Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.854398 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24"} Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.854442 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"70472ca888eea7b45512fbd5fbca86b423ce71045a9a272f9dccb51658c51c68"} Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.856943 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.858266 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5"} Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.858806 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.860480 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16"} Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.861814 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3382824ab1f4bdb890c87a1f54a6a1feb66e43685c12eff93d979d68f393db6f"} Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.864957 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.900291 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-o
perator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.932280 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.948027 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.968156 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:35 crc kubenswrapper[4889]: I0219 00:06:35.996920 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.019111 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.034736 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.049814 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.063797 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.078885 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.102187 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.127571 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.127667 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.127700 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.127724 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.127743 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.127836 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:06:38.127807691 +0000 UTC m=+24.092472682 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.127863 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.127900 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128471 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:38.127946884 +0000 UTC m=+24.092611875 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128522 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128605 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128627 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128564 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:38.128512442 +0000 UTC m=+24.093177433 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128683 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128704 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128742 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:38.128709878 +0000 UTC m=+24.093374879 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128746 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:36 crc kubenswrapper[4889]: E0219 00:06:36.128821 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:38.128810831 +0000 UTC m=+24.093475822 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.168203 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-x8h8n"] Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.168551 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-x8h8n" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.171118 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.171678 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.172666 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.183205 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.216176 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.229124 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13343f8d-046b-4e45-8424-a240f34a9667-hosts-file\") pod \"node-resolver-x8h8n\" (UID: \"13343f8d-046b-4e45-8424-a240f34a9667\") " pod="openshift-dns/node-resolver-x8h8n" Feb 19 00:06:36 crc kubenswrapper[4889]: 
I0219 00:06:36.229181 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tj5m\" (UniqueName: \"kubernetes.io/projected/13343f8d-046b-4e45-8424-a240f34a9667-kube-api-access-8tj5m\") pod \"node-resolver-x8h8n\" (UID: \"13343f8d-046b-4e45-8424-a240f34a9667\") " pod="openshift-dns/node-resolver-x8h8n" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.233799 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\
\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.248596 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.262246 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.275166 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.291261 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.312152 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.329415 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.329601 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13343f8d-046b-4e45-8424-a240f34a9667-hosts-file\") pod \"node-resolver-x8h8n\" (UID: \"13343f8d-046b-4e45-8424-a240f34a9667\") " pod="openshift-dns/node-resolver-x8h8n" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.329673 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tj5m\" (UniqueName: \"kubernetes.io/projected/13343f8d-046b-4e45-8424-a240f34a9667-kube-api-access-8tj5m\") pod \"node-resolver-x8h8n\" (UID: \"13343f8d-046b-4e45-8424-a240f34a9667\") " pod="openshift-dns/node-resolver-x8h8n" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.329726 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/13343f8d-046b-4e45-8424-a240f34a9667-hosts-file\") pod \"node-resolver-x8h8n\" (UID: 
\"13343f8d-046b-4e45-8424-a240f34a9667\") " pod="openshift-dns/node-resolver-x8h8n" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.342005 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.353633 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tj5m\" (UniqueName: \"kubernetes.io/projected/13343f8d-046b-4e45-8424-a240f34a9667-kube-api-access-8tj5m\") pod \"node-resolver-x8h8n\" (UID: \"13343f8d-046b-4e45-8424-a240f34a9667\") " pod="openshift-dns/node-resolver-x8h8n" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.360496 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.481155 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-x8h8n" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.553626 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-pcmlw"] Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.554204 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-qmhk6"] Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.554406 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.554670 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.556358 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-2vx4q"] Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.556882 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.562691 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.563198 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.563209 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.563326 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.563672 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.563749 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4nwjd"] Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.563805 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.563921 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.563964 4889 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.564009 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.564280 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.564516 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.564592 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.569718 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.569783 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.570574 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.570654 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.570729 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.570747 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.570789 4889 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.570952 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.580213 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.610719 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.626795 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632415 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-systemd-units\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632446 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-netns\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632463 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-env-overrides\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632491 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-cnibin\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632508 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632526 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-cnibin\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632542 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-run-k8s-cni-cncf-io\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 
00:06:36.632558 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-kubelet\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632576 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-log-socket\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632594 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-socket-dir-parent\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632608 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-etc-openvswitch\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632712 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s27x5\" (UniqueName: \"kubernetes.io/projected/900d194e-937f-4a59-abba-21ed9f94f24f-kube-api-access-s27x5\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" 
Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632786 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-system-cni-dir\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632819 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-os-release\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.632868 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-var-lib-cni-bin\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633030 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmd6z\" (UniqueName: \"kubernetes.io/projected/3f91d278-7461-4166-8613-4b78aa4e93be-kube-api-access-mmd6z\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633106 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/900d194e-937f-4a59-abba-21ed9f94f24f-proxy-tls\") pod \"machine-config-daemon-pcmlw\" (UID: 
\"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633156 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-cni-dir\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633171 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-cni-binary-copy\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633196 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/900d194e-937f-4a59-abba-21ed9f94f24f-rootfs\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633215 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633253 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-system-cni-dir\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633269 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-var-lib-kubelet\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633287 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-script-lib\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633305 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-var-lib-cni-multus\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633322 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-bin\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633372 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-run-netns\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633406 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-ovn-kubernetes\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633431 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-config\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633523 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-systemd\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633594 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-var-lib-openvswitch\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633633 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-node-log\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633656 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-netd\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633682 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3f91d278-7461-4166-8613-4b78aa4e93be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633721 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-daemon-config\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633746 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-os-release\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633767 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-slash\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633788 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-ovn\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633825 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/900d194e-937f-4a59-abba-21ed9f94f24f-mcd-auth-proxy-config\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633854 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3f91d278-7461-4166-8613-4b78aa4e93be-cni-binary-copy\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633882 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-etc-kubernetes\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633904 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-openvswitch\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633927 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lssf6\" (UniqueName: \"kubernetes.io/projected/707d1219-7187-4fda-b155-e6d64687b190-kube-api-access-lssf6\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633951 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-run-multus-certs\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633974 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-hostroot\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.633996 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-conf-dir\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.634042 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srlck\" (UniqueName: \"kubernetes.io/projected/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-kube-api-access-srlck\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.634065 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/707d1219-7187-4fda-b155-e6d64687b190-ovn-node-metrics-cert\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.652482 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.667883 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.684251 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.686152 4889 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 00:01:35 +0000 UTC, rotation deadline is 2026-12-12 04:58:30.78717349 +0000 UTC Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.686227 4889 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7108h51m54.100961134s for next certificate rotation Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.701987 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.734847 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-run-k8s-cni-cncf-io\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.734900 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-kubelet\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.734962 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-log-socket\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.734974 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-run-k8s-cni-cncf-io\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735032 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-log-socket\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.734988 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s27x5\" (UniqueName: \"kubernetes.io/projected/900d194e-937f-4a59-abba-21ed9f94f24f-kube-api-access-s27x5\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735132 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-system-cni-dir\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.734979 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-kubelet\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735183 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-socket-dir-parent\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735244 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-etc-openvswitch\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735209 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-system-cni-dir\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735283 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-os-release\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735298 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-etc-openvswitch\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735341 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-var-lib-cni-bin\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735384 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmd6z\" (UniqueName: \"kubernetes.io/projected/3f91d278-7461-4166-8613-4b78aa4e93be-kube-api-access-mmd6z\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735437 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/900d194e-937f-4a59-abba-21ed9f94f24f-proxy-tls\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735488 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-tuning-conf-dir\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735503 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-socket-dir-parent\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735512 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-var-lib-cni-bin\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735523 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-cni-dir\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735598 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-os-release\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735623 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-cni-binary-copy\") pod 
\"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735659 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/900d194e-937f-4a59-abba-21ed9f94f24f-rootfs\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735691 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-system-cni-dir\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735715 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/900d194e-937f-4a59-abba-21ed9f94f24f-rootfs\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735716 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-var-lib-kubelet\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735742 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-cni-dir\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 
crc kubenswrapper[4889]: I0219 00:06:36.735759 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-script-lib\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735749 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-var-lib-kubelet\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735797 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-var-lib-cni-multus\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735817 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-bin\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735836 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-system-cni-dir\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735839 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-run-netns\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735861 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-run-netns\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735885 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-bin\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735899 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-ovn-kubernetes\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735927 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-config\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735936 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-ovn-kubernetes\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735879 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-var-lib-cni-multus\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735929 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:11:37.556107575 +0000 UTC Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735955 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-systemd\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736027 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-daemon-config\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736077 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-var-lib-openvswitch\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 
crc kubenswrapper[4889]: I0219 00:06:36.736104 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-node-log\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736129 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-netd\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736171 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3f91d278-7461-4166-8613-4b78aa4e93be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736197 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/900d194e-937f-4a59-abba-21ed9f94f24f-mcd-auth-proxy-config\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736253 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3f91d278-7461-4166-8613-4b78aa4e93be-cni-binary-copy\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" 
Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736284 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-os-release\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736311 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-slash\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736361 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-ovn\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736392 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-run-multus-certs\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736418 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-etc-kubernetes\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736446 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-openvswitch\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736528 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lssf6\" (UniqueName: \"kubernetes.io/projected/707d1219-7187-4fda-b155-e6d64687b190-kube-api-access-lssf6\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736556 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-hostroot\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736581 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-conf-dir\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736603 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srlck\" (UniqueName: \"kubernetes.io/projected/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-kube-api-access-srlck\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736630 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/707d1219-7187-4fda-b155-e6d64687b190-ovn-node-metrics-cert\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736648 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-env-overrides\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736669 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-systemd-units\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-netns\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736726 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-cnibin\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736752 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736775 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-cnibin\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736861 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-cnibin\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.735981 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-systemd\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736877 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-script-lib\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736900 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-cni-binary-copy\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736936 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-node-log\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736915 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-var-lib-openvswitch\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736948 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-config\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.736972 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-netd\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737002 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-hostroot\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " 
pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737025 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-systemd-units\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737050 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-netns\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737105 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-cnibin\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737132 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737155 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-slash\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737279 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-ovn\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737328 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-host-run-multus-certs\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737363 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-etc-kubernetes\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737398 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-openvswitch\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737469 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-daemon-config\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737516 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-multus-conf-dir\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737540 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-env-overrides\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737733 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3f91d278-7461-4166-8613-4b78aa4e93be-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.737829 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-os-release\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.738160 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3f91d278-7461-4166-8613-4b78aa4e93be-cni-binary-copy\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.738310 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3f91d278-7461-4166-8613-4b78aa4e93be-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.738416 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/900d194e-937f-4a59-abba-21ed9f94f24f-mcd-auth-proxy-config\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.739616 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.747099 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/900d194e-937f-4a59-abba-21ed9f94f24f-proxy-tls\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.757497 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lssf6\" (UniqueName: \"kubernetes.io/projected/707d1219-7187-4fda-b155-e6d64687b190-kube-api-access-lssf6\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.760450 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.760702 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmd6z\" (UniqueName: \"kubernetes.io/projected/3f91d278-7461-4166-8613-4b78aa4e93be-kube-api-access-mmd6z\") pod \"multus-additional-cni-plugins-2vx4q\" (UID: \"3f91d278-7461-4166-8613-4b78aa4e93be\") " pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.763764 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srlck\" (UniqueName: \"kubernetes.io/projected/7dcfc583-b6f2-415a-a4f0-adb70f4865c8-kube-api-access-srlck\") pod \"multus-qmhk6\" (UID: \"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\") " pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.764582 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/707d1219-7187-4fda-b155-e6d64687b190-ovn-node-metrics-cert\") pod \"ovnkube-node-4nwjd\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.769037 4889 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-s27x5\" (UniqueName: \"kubernetes.io/projected/900d194e-937f-4a59-abba-21ed9f94f24f-kube-api-access-s27x5\") pod \"machine-config-daemon-pcmlw\" (UID: \"900d194e-937f-4a59-abba-21ed9f94f24f\") " pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.784813 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.806758 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.823878 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.836974 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.858942 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.865720 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x8h8n" event={"ID":"13343f8d-046b-4e45-8424-a240f34a9667","Type":"ContainerStarted","Data":"795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677"} Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.866611 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-x8h8n" 
event={"ID":"13343f8d-046b-4e45-8424-a240f34a9667","Type":"ContainerStarted","Data":"9a3ad2d842f4adc8345dd3cb1a9f9bba0edede4449ec9e4a0c074bd4b0730e42"} Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.873701 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-qmhk6" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.875017 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.884400 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:06:36 crc kubenswrapper[4889]: W0219 00:06:36.885577 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dcfc583_b6f2_415a_a4f0_adb70f4865c8.slice/crio-2973096b50c90540355c865e8d31cccfe3565fa790142713fea00cd364529539 WatchSource:0}: Error finding container 2973096b50c90540355c865e8d31cccfe3565fa790142713fea00cd364529539: Status 404 returned error can't find the container with id 2973096b50c90540355c865e8d31cccfe3565fa790142713fea00cd364529539 Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.894868 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.905661 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.906189 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.922231 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: W0219 00:06:36.922396 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod707d1219_7187_4fda_b155_e6d64687b190.slice/crio-e81efd83b50afae8905567e1aca0dbdca2a1b40443de06e497480eb4746ac0b5 WatchSource:0}: Error finding container e81efd83b50afae8905567e1aca0dbdca2a1b40443de06e497480eb4746ac0b5: Status 404 returned error can't find the container with id e81efd83b50afae8905567e1aca0dbdca2a1b40443de06e497480eb4746ac0b5 Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.940361 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:36 crc kubenswrapper[4889]: I0219 00:06:36.954569 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.001590 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:36Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.042474 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.072257 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.090482 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.111967 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.129144 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.148648 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.169056 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.186244 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.203037 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.220632 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.238439 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.262771 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.277181 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.292296 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.351151 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.361453 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.362891 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.368029 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.386707 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.402812 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.426466 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.443377 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.459886 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.479954 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.511458 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.539805 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.568169 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.597598 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.629540 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.659314 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.691528 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.718452 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.724169 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.724169 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.724200 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:37 crc kubenswrapper[4889]: E0219 00:06:37.724819 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:37 crc kubenswrapper[4889]: E0219 00:06:37.724947 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:37 crc kubenswrapper[4889]: E0219 00:06:37.725057 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.735895 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.736101 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:21:45.056543995 +0000 UTC Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.751513 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.771329 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.788994 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.803313 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.818890 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.833345 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.849956 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.862196 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.870995 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b" exitCode=0 Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.871115 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" 
event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b"} Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.871183 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"e81efd83b50afae8905567e1aca0dbdca2a1b40443de06e497480eb4746ac0b5"} Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.874486 4889 generic.go:334] "Generic (PLEG): container finished" podID="3f91d278-7461-4166-8613-4b78aa4e93be" containerID="99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f" exitCode=0 Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.874546 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" event={"ID":"3f91d278-7461-4166-8613-4b78aa4e93be","Type":"ContainerDied","Data":"99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f"} Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.874568 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" event={"ID":"3f91d278-7461-4166-8613-4b78aa4e93be","Type":"ContainerStarted","Data":"a414ea7a3d52feaffac0c7df406beb9056e538ff9bbb8cef999c95eb2f65f144"} Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.877090 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8"} Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.877140 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" 
event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f"} Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.877152 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"108c7d8deb38840827d6c6bf1de56ad380df8d8d9662e3242998cbb3806ce96c"} Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.878983 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmhk6" event={"ID":"7dcfc583-b6f2-415a-a4f0-adb70f4865c8","Type":"ContainerStarted","Data":"af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec"} Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.879020 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmhk6" event={"ID":"7dcfc583-b6f2-415a-a4f0-adb70f4865c8","Type":"ContainerStarted","Data":"2973096b50c90540355c865e8d31cccfe3565fa790142713fea00cd364529539"} Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.885817 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.911312 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.932258 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.961905 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.982703 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:37 crc kubenswrapper[4889]: I0219 00:06:37.994603 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:37Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.008155 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.028711 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.046977 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.068099 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.083779 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.119367 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.151849 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.151937 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.151963 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152113 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152130 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152142 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.152207 4889 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152244 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152319 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152333 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:06:42.152260568 +0000 UTC m=+28.116925559 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152398 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:06:42.152379302 +0000 UTC m=+28.117044293 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.152448 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152354 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152699 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152706 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:42.152684791 +0000 UTC m=+28.117349982 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152262 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152753 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:42.152744383 +0000 UTC m=+28.117409574 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:38 crc kubenswrapper[4889]: E0219 00:06:38.152773 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:42.152762593 +0000 UTC m=+28.117427804 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.156762 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.193424 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.736362 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:27:30.560777915 +0000 UTC Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.883826 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15"} Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.886516 4889 generic.go:334] "Generic (PLEG): container finished" podID="3f91d278-7461-4166-8613-4b78aa4e93be" containerID="aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8" exitCode=0 Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.886575 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-2vx4q" event={"ID":"3f91d278-7461-4166-8613-4b78aa4e93be","Type":"ContainerDied","Data":"aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8"} Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.889842 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.889880 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.889893 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.889904 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.889914 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.910376 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.922898 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.936851 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.952506 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.971822 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:38 crc kubenswrapper[4889]: I0219 00:06:38.988675 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.002361 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.022366 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.040768 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.054622 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.066123 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.083061 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.114756 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.133276 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.150581 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.168742 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.184420 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.202288 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.217539 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.220940 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-mpwsr"] Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.221306 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.223982 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.225160 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.225690 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.225796 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.242424 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.264958 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb6fe6a2-a58a-4630-9df9-f7840e5088ed-host\") pod \"node-ca-mpwsr\" (UID: \"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\") " pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.265235 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fb6fe6a2-a58a-4630-9df9-f7840e5088ed-serviceca\") pod \"node-ca-mpwsr\" (UID: \"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\") " pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.265295 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ldxf\" (UniqueName: \"kubernetes.io/projected/fb6fe6a2-a58a-4630-9df9-f7840e5088ed-kube-api-access-2ldxf\") pod \"node-ca-mpwsr\" (UID: \"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\") " pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.265358 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.279957 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.301691 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.318202 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.342475 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.357750 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.366936 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb6fe6a2-a58a-4630-9df9-f7840e5088ed-host\") pod \"node-ca-mpwsr\" (UID: \"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\") " pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.366983 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fb6fe6a2-a58a-4630-9df9-f7840e5088ed-serviceca\") pod \"node-ca-mpwsr\" (UID: \"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\") " pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.367038 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ldxf\" (UniqueName: \"kubernetes.io/projected/fb6fe6a2-a58a-4630-9df9-f7840e5088ed-kube-api-access-2ldxf\") pod 
\"node-ca-mpwsr\" (UID: \"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\") " pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.367428 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fb6fe6a2-a58a-4630-9df9-f7840e5088ed-host\") pod \"node-ca-mpwsr\" (UID: \"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\") " pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.369102 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fb6fe6a2-a58a-4630-9df9-f7840e5088ed-serviceca\") pod \"node-ca-mpwsr\" (UID: \"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\") " pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.378125 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.393626 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ldxf\" (UniqueName: \"kubernetes.io/projected/fb6fe6a2-a58a-4630-9df9-f7840e5088ed-kube-api-access-2ldxf\") pod \"node-ca-mpwsr\" (UID: \"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\") " pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.418253 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.454444 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.494017 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.534190 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.574857 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.610466 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.625203 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-mpwsr" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.658710 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"ima
ge\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.696550 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.724393 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.724424 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.724508 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:39 crc kubenswrapper[4889]: E0219 00:06:39.724549 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:39 crc kubenswrapper[4889]: E0219 00:06:39.724707 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:39 crc kubenswrapper[4889]: E0219 00:06:39.724979 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.737370 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:23:47.910124333 +0000 UTC Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.740443 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.772803 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.809719 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.850668 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.892610 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.896384 4889 generic.go:334] "Generic (PLEG): container finished" podID="3f91d278-7461-4166-8613-4b78aa4e93be" containerID="e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc" exitCode=0 Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.896492 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" event={"ID":"3f91d278-7461-4166-8613-4b78aa4e93be","Type":"ContainerDied","Data":"e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc"} Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.902673 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.904296 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mpwsr" event={"ID":"fb6fe6a2-a58a-4630-9df9-f7840e5088ed","Type":"ContainerStarted","Data":"99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984"} Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.904384 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-mpwsr" event={"ID":"fb6fe6a2-a58a-4630-9df9-f7840e5088ed","Type":"ContainerStarted","Data":"8c014ce2e70752bc56fec1d7b121059cd33c09ca00c488a711a0c1a5794024cf"} Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.934331 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:39 crc kubenswrapper[4889]: I0219 00:06:39.975949 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.013176 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.052626 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.091584 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.131722 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.170625 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.211739 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 
00:06:40.253630 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.290938 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.332421 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.377086 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.431664 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.451617 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.493053 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.531510 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.571328 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.612703 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.654816 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.689131 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.733414 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.737893 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 04:45:58.173830525 +0000 UTC Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.772191 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cf
c1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.812530 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.854695 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.870400 4889 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.872695 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.872750 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.872760 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.872897 4889 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.893943 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.912985 4889 generic.go:334] "Generic (PLEG): container finished" podID="3f91d278-7461-4166-8613-4b78aa4e93be" containerID="9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364" exitCode=0 Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.913066 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" event={"ID":"3f91d278-7461-4166-8613-4b78aa4e93be","Type":"ContainerDied","Data":"9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364"} Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.945176 4889 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.945592 4889 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.947044 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:40 crc 
kubenswrapper[4889]: I0219 00:06:40.947126 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.947142 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.947163 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.947181 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:40Z","lastTransitionTime":"2026-02-19T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.971244 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: E0219 00:06:40.980205 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.985917 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.985978 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.985994 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.986017 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:40 crc kubenswrapper[4889]: I0219 00:06:40.986030 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:40Z","lastTransitionTime":"2026-02-19T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: E0219 00:06:41.000410 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.005529 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.005573 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.005585 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.005605 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.005617 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.014796 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: E0219 00:06:41.018019 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.022542 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.022672 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.022690 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.022737 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.022754 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: E0219 00:06:41.036298 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.042463 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.042496 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.042510 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.042529 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.042541 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.052541 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: E0219 
00:06:41.060080 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: E0219 00:06:41.060247 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.063885 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.063914 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.063923 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.063941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.063952 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.093736 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.134105 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.167348 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.167382 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.167395 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.167411 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.167421 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.173318 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.213072 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.253999 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.270193 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.270266 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.270279 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.270298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.270310 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.293846 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z 
is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.337504 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.372811 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.373400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.373460 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.373475 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 
00:06:41.373496 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.373511 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.410373 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.454195 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.475934 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.475996 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.476010 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 
00:06:41.476038 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.476054 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.491670 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.537192 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.574546 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.578672 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.578736 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.578750 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.578779 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.578793 4889 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.613577 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.682416 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.682480 4889 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.682495 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.682519 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.682531 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.724586 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.724741 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:41 crc kubenswrapper[4889]: E0219 00:06:41.724786 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.725066 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:41 crc kubenswrapper[4889]: E0219 00:06:41.725058 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:41 crc kubenswrapper[4889]: E0219 00:06:41.725208 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.738283 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:18:08.156955883 +0000 UTC Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.785514 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.785569 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.785583 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.785605 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc 
kubenswrapper[4889]: I0219 00:06:41.785620 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.889268 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.889767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.889777 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.889794 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.889804 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.919956 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" event={"ID":"3f91d278-7461-4166-8613-4b78aa4e93be","Type":"ContainerStarted","Data":"5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.925212 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.935164 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.950389 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.962851 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.978003 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.993198 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:41 crc 
kubenswrapper[4889]: I0219 00:06:41.993273 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.993288 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.993314 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:41 crc kubenswrapper[4889]: I0219 00:06:41.993329 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:41Z","lastTransitionTime":"2026-02-19T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.000089 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:41Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.012100 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.029956 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
0:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.047243 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.064566 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.082063 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.096876 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.096921 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.096931 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.096947 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.096958 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:42Z","lastTransitionTime":"2026-02-19T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.099985 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.115874 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.132360 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.175086 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd
6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.198451 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.198721 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:06:50.198685969 +0000 UTC m=+36.163350970 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.198974 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.199289 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.199515 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.199241 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 
00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.199787 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.199816 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.199890 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:50.199872876 +0000 UTC m=+36.164537897 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.199456 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.200098 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.199626 4889 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.199725 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.200142 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:50.200091382 +0000 UTC m=+36.164756443 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.200256 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:50.200245676 +0000 UTC m=+36.164910867 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.200165 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.200491 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:42 crc kubenswrapper[4889]: E0219 00:06:42.200589 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:06:50.200569616 +0000 UTC m=+36.165234607 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.201994 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.202043 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.202056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.202077 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.202089 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:42Z","lastTransitionTime":"2026-02-19T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.306201 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.306270 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.306283 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.306307 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.306322 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:42Z","lastTransitionTime":"2026-02-19T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.409088 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.409142 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.409153 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.409174 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.409187 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:42Z","lastTransitionTime":"2026-02-19T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.512266 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.512336 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.512353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.512367 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.512378 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:42Z","lastTransitionTime":"2026-02-19T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.614993 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.615068 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.615082 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.615105 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.615118 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:42Z","lastTransitionTime":"2026-02-19T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.718258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.718310 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.718326 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.718352 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.718366 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:42Z","lastTransitionTime":"2026-02-19T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.738685 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:44:27.858263886 +0000 UTC Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.820976 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.821032 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.821046 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.821067 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.821082 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:42Z","lastTransitionTime":"2026-02-19T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.923383 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.923807 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.923910 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.924058 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.924136 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:42Z","lastTransitionTime":"2026-02-19T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.931314 4889 generic.go:334] "Generic (PLEG): container finished" podID="3f91d278-7461-4166-8613-4b78aa4e93be" containerID="5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270" exitCode=0 Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.931357 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" event={"ID":"3f91d278-7461-4166-8613-4b78aa4e93be","Type":"ContainerDied","Data":"5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270"} Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.948511 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba
93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\"
:\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.969443 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:42 crc kubenswrapper[4889]: I0219 00:06:42.983696 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91
da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:42Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.006304 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.020838 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.026274 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.026319 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.026329 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.026348 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.026358 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.032935 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.045753 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.059523 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.075611 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.088670 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c
4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.103742 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.118676 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.133171 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.134140 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.134259 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.134281 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc 
kubenswrapper[4889]: I0219 00:06:43.134308 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.134336 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.198151 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.237919 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.237971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.237981 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.238002 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 
crc kubenswrapper[4889]: I0219 00:06:43.238013 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.341475 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.341535 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.341545 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.341570 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.341595 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.448748 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.448804 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.448818 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.448846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.448861 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.552178 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.552269 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.552286 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.552314 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.552340 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.654894 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.654948 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.654958 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.654976 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.654987 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.724625 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.724622 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:43 crc kubenswrapper[4889]: E0219 00:06:43.724763 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.724651 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:43 crc kubenswrapper[4889]: E0219 00:06:43.724898 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:43 crc kubenswrapper[4889]: E0219 00:06:43.725018 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.739200 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 06:25:23.105476008 +0000 UTC Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.757685 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.757760 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.757779 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.757806 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.757822 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.863197 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.863293 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.863311 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.863332 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.863346 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.939074 4889 generic.go:334] "Generic (PLEG): container finished" podID="3f91d278-7461-4166-8613-4b78aa4e93be" containerID="436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40" exitCode=0 Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.939165 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" event={"ID":"3f91d278-7461-4166-8613-4b78aa4e93be","Type":"ContainerDied","Data":"436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.947683 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.948018 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.948124 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.974509 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.974561 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.974570 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.974590 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:43 
crc kubenswrapper[4889]: I0219 00:06:43.974600 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:43Z","lastTransitionTime":"2026-02-19T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.975009 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.979192 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:43 crc kubenswrapper[4889]: I0219 00:06:43.990113 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.025699 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.062830 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.097512 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:44 crc 
kubenswrapper[4889]: I0219 00:06:44.097571 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.097588 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.097611 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.097628 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:44Z","lastTransitionTime":"2026-02-19T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.113948 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.127112 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.142262 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
0:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.159570 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.172703 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.188208 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.201819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.201865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.201879 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.201904 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.201925 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:44Z","lastTransitionTime":"2026-02-19T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.203414 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.217069 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.231379 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.248434 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.265440 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.278396 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.290607 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.305409 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.305451 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.305462 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.305482 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.305496 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:44Z","lastTransitionTime":"2026-02-19T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.307660 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.324709 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.348146 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.364203 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.379597 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.402607 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.408785 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.409014 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.409027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.409048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.409063 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:44Z","lastTransitionTime":"2026-02-19T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.415390 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.432073 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.446147 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.460834 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704
ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.476867 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.511960 4889 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.512789 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.512900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.512972 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.513061 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.513140 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:44Z","lastTransitionTime":"2026-02-19T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.616107 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.616178 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.616201 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.616257 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.616281 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:44Z","lastTransitionTime":"2026-02-19T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.719239 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.719660 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.719775 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.719854 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.719930 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:44Z","lastTransitionTime":"2026-02-19T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.738407 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.739342 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:36:37.149417807 +0000 UTC Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.758287 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.775453 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.807391 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.823299 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.823343 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.823355 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.823374 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.823387 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:44Z","lastTransitionTime":"2026-02-19T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.827649 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.844709 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.859675 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.876344 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704
ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.895702 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.911948 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.926795 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.926844 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.926855 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:44 crc 
kubenswrapper[4889]: I0219 00:06:44.926876 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.926890 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:44Z","lastTransitionTime":"2026-02-19T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.928058 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.941291 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c
4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.956019 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" event={"ID":"3f91d278-7461-4166-8613-4b78aa4e93be","Type":"ContainerStarted","Data":"59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f"} Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.956576 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:44 crc 
kubenswrapper[4889]: I0219 00:06:44.962772 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.986136 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:44 crc kubenswrapper[4889]: I0219 00:06:44.986987 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 
00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.010240 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.026129 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.030173 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.030367 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.030433 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.030502 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.030572 4889 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.042750 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.065674 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f
0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.079638 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.091971 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.103043 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.117701 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.134588 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.134645 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.134656 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.134677 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.134689 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.139979 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.153567 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.170557 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.186041 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.199278 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.217846 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.236949 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc 
kubenswrapper[4889]: I0219 00:06:45.237009 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.237025 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.237051 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.237068 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.344328 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.344379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.344391 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.344408 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.344420 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.447153 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.447206 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.447238 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.447329 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.447343 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.550880 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.550941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.551004 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.551036 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.551056 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.653881 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.653941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.653953 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.653977 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.653990 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.724602 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.724646 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.724716 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:45 crc kubenswrapper[4889]: E0219 00:06:45.724809 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:45 crc kubenswrapper[4889]: E0219 00:06:45.724905 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:45 crc kubenswrapper[4889]: E0219 00:06:45.724999 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.740590 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:52:44.561188095 +0000 UTC Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.757281 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.757324 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.757334 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.757355 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.757373 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.860713 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.860763 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.860778 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.860801 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.860814 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.963577 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.963621 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.963634 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.963650 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:45 crc kubenswrapper[4889]: I0219 00:06:45.963661 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:45Z","lastTransitionTime":"2026-02-19T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.066659 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.066702 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.066713 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.066732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.066748 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.168842 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.168880 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.168889 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.168907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.168918 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.273391 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.273446 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.273478 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.273502 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.273518 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.375981 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.376025 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.376035 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.376053 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.376064 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.478452 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.478489 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.478500 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.478516 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.478525 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.581900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.581943 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.581953 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.581972 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.581983 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.688045 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.688089 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.688103 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.688128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.688138 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.741450 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:01:10.225140833 +0000 UTC Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.791337 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.791380 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.791391 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.791410 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.791421 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.894581 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.894630 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.894645 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.894664 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.894678 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.997986 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.998064 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.998081 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.998104 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:46 crc kubenswrapper[4889]: I0219 00:06:46.998122 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:46Z","lastTransitionTime":"2026-02-19T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.100929 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.100975 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.100984 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.101003 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.101018 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:47Z","lastTransitionTime":"2026-02-19T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.204315 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.204368 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.204383 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.204406 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.204420 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:47Z","lastTransitionTime":"2026-02-19T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.307442 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.307489 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.307499 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.307515 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.307526 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:47Z","lastTransitionTime":"2026-02-19T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.411362 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.411717 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.411795 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.411953 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.412045 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:47Z","lastTransitionTime":"2026-02-19T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.514309 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.514612 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.514846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.514999 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.515114 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:47Z","lastTransitionTime":"2026-02-19T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.618072 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.618146 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.618168 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.618198 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.618249 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:47Z","lastTransitionTime":"2026-02-19T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.721663 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.721698 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.721708 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.721721 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.721730 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:47Z","lastTransitionTime":"2026-02-19T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.724688 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.724710 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:47 crc kubenswrapper[4889]: E0219 00:06:47.724778 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.724710 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:47 crc kubenswrapper[4889]: E0219 00:06:47.724910 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:47 crc kubenswrapper[4889]: E0219 00:06:47.725068 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.742512 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:32:03.962251799 +0000 UTC Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.824834 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.824888 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.824906 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.824932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.824950 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:47Z","lastTransitionTime":"2026-02-19T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.927890 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.927945 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.927956 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.927983 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.927997 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:47Z","lastTransitionTime":"2026-02-19T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.971588 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/0.log" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.975513 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2" exitCode=1 Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.975607 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2"} Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.976762 4889 scope.go:117] "RemoveContainer" containerID="e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2" Feb 19 00:06:47 crc kubenswrapper[4889]: I0219 00:06:47.996157 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c
4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:47Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.015802 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f
0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.030533 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.030566 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.030579 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.030598 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.030612 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.036706 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.053654 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.067828 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.087145 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.109839 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:47Z\\\",\\\"message\\\":\\\" 00:06:47.001286 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:06:47.001305 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:06:47.001311 6212 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:06:47.001331 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:06:47.001354 6212 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:06:47.001364 6212 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:06:47.001387 6212 factory.go:656] Stopping watch factory\\\\nI0219 00:06:47.001412 6212 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:06:47.001511 6212 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:06:47.001514 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:06:47.001524 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:06:47.001527 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:06:47.001529 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:06:47.001560 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:06:47.001547 6212 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:06:47.001550 6212 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.126846 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.133328 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.133382 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.133394 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.133407 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.133416 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.139593 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.155696 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.170534 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.183356 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.196821 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.217692 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:48Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.236625 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.236668 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.236683 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.236704 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.236716 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.339901 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.339954 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.339968 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.339988 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.340002 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.442419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.442490 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.442516 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.442547 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.442569 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.546820 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.546881 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.546900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.546927 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.546949 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.650155 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.650363 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.650400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.650430 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.650453 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.744271 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:58:39.349745734 +0000 UTC Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.753109 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.753147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.753158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.753173 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.753185 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.857048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.857099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.857111 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.857130 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.857141 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.959592 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.959637 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.959645 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.959661 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.959674 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:48Z","lastTransitionTime":"2026-02-19T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.983986 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/0.log" Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.991720 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896"} Feb 19 00:06:48 crc kubenswrapper[4889]: I0219 00:06:48.992383 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.006296 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.027820 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.042454 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.058636 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.063139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.063330 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.063585 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc 
kubenswrapper[4889]: I0219 00:06:49.063666 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.063735 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.072151 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.085509 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c
4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.108914 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f
0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.123838 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.137062 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.155852 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.166711 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.166774 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.166794 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.166822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.166841 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.170304 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:
06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.200186 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:47Z\\\",\\\"message\\\":\\\" 00:06:47.001286 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:06:47.001305 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:06:47.001311 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:06:47.001331 6212 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:06:47.001354 6212 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:06:47.001364 6212 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:06:47.001387 6212 factory.go:656] Stopping watch factory\\\\nI0219 00:06:47.001412 6212 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:06:47.001511 6212 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:06:47.001514 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:06:47.001524 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:06:47.001527 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:06:47.001529 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:06:47.001560 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:06:47.001547 6212 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:06:47.001550 6212 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.211681 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.227874 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.270110 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.270182 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.270193 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.270208 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.270242 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.373657 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.373715 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.373732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.373757 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.373767 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.476300 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.476352 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.476365 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.476386 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.476401 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.579713 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.579918 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.579938 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.580002 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.580031 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.683988 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.684050 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.684058 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.684083 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.684094 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.724703 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.724762 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.724808 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:49 crc kubenswrapper[4889]: E0219 00:06:49.724870 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:49 crc kubenswrapper[4889]: E0219 00:06:49.724994 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:49 crc kubenswrapper[4889]: E0219 00:06:49.725086 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.730450 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75"] Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.731126 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.735268 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.736062 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.744531 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:27:22.191813817 +0000 UTC Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.751415 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4
cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.766752 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.780270 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.787343 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.787397 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.787407 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.787430 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.787445 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.795807 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:
06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.816125 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zr8s\" (UniqueName: \"kubernetes.io/projected/9c2d69f3-4487-470a-b1e9-489e05f244eb-kube-api-access-2zr8s\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.816521 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c2d69f3-4487-470a-b1e9-489e05f244eb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.816654 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c2d69f3-4487-470a-b1e9-489e05f244eb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.816805 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9c2d69f3-4487-470a-b1e9-489e05f244eb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.819406 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:47Z\\\",\\\"message\\\":\\\" 00:06:47.001286 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:06:47.001305 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:06:47.001311 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:06:47.001331 6212 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:06:47.001354 6212 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:06:47.001364 6212 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:06:47.001387 6212 factory.go:656] Stopping watch factory\\\\nI0219 00:06:47.001412 6212 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:06:47.001511 6212 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:06:47.001514 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:06:47.001524 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:06:47.001527 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:06:47.001529 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:06:47.001560 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:06:47.001547 6212 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:06:47.001550 6212 handler.go:208] Removed *v1.NetworkPolicy 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.831535 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.847538 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
0:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.868193 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.883603 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.889654 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.889803 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.889911 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.890025 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.890135 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.904283 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.917878 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c2d69f3-4487-470a-b1e9-489e05f244eb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.917947 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zr8s\" (UniqueName: \"kubernetes.io/projected/9c2d69f3-4487-470a-b1e9-489e05f244eb-kube-api-access-2zr8s\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.918040 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c2d69f3-4487-470a-b1e9-489e05f244eb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.918078 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c2d69f3-4487-470a-b1e9-489e05f244eb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.918881 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c2d69f3-4487-470a-b1e9-489e05f244eb-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.919010 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c2d69f3-4487-470a-b1e9-489e05f244eb-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.923542 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.926807 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c2d69f3-4487-470a-b1e9-489e05f244eb-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.938575 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zr8s\" (UniqueName: \"kubernetes.io/projected/9c2d69f3-4487-470a-b1e9-489e05f244eb-kube-api-access-2zr8s\") pod \"ovnkube-control-plane-749d76644c-lvs75\" (UID: \"9c2d69f3-4487-470a-b1e9-489e05f244eb\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.948072 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.961909 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.978521 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-p
lugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"
mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c
2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.993227 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.993277 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.993290 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.993316 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.993329 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:49Z","lastTransitionTime":"2026-02-19T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.997089 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/1.log" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.996114 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:49 crc kubenswrapper[4889]: I0219 00:06:49.997911 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/0.log" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.001189 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896" exitCode=1 Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.001251 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.001299 4889 scope.go:117] "RemoveContainer" containerID="e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.002139 4889 scope.go:117] "RemoveContainer" containerID="51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896" Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.002403 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.019988 4889 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.039401 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.047202 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.059142 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f370
4ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.074892 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.096338 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.099507 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.099589 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.099616 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.099647 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.099670 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:50Z","lastTransitionTime":"2026-02-19T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.113210 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.125471 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.141931 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.161365 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.176665 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.195376 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.202124 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.202168 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.202180 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.202200 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.202240 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:50Z","lastTransitionTime":"2026-02-19T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.210244 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.222272 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.222383 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.222413 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.222436 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.222466 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.222468 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222503 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:07:06.222468798 +0000 UTC m=+52.187133819 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222665 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222700 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222545 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222723 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222613 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:06:50 crc kubenswrapper[4889]: 
E0219 00:06:50.222781 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:06.222757637 +0000 UTC m=+52.187422668 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222792 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222807 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222653 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222806 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:06.222793428 +0000 UTC m=+52.187458529 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222892 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:06.222876381 +0000 UTC m=+52.187541412 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.222913 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:06.222903392 +0000 UTC m=+52.187568423 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.236108 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.253004 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:47Z\\\",\\\"message\\\":\\\" 00:06:47.001286 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:06:47.001305 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:06:47.001311 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:06:47.001331 6212 handler.go:190] Sending 
*v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:06:47.001354 6212 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:06:47.001364 6212 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:06:47.001387 6212 factory.go:656] Stopping watch factory\\\\nI0219 00:06:47.001412 6212 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:06:47.001511 6212 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:06:47.001514 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:06:47.001524 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:06:47.001527 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:06:47.001529 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:06:47.001560 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:06:47.001547 6212 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:06:47.001550 6212 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\" *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 00:06:49.323568 6354 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 00:06:49.323572 6354 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nF0219 00:06:49.323570 6354 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, 
err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:06:49.323578 6354 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0219 00:06:49.323586 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.305018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.305438 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.305505 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.305582 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.305642 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:50Z","lastTransitionTime":"2026-02-19T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.408499 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.408727 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.408793 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.408874 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.408954 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:50Z","lastTransitionTime":"2026-02-19T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.512712 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.512754 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.512767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.512783 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.512792 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:50Z","lastTransitionTime":"2026-02-19T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.615453 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.615503 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.615514 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.615532 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.615546 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:50Z","lastTransitionTime":"2026-02-19T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.718072 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.718158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.718185 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.718257 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.718285 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:50Z","lastTransitionTime":"2026-02-19T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.745539 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:02:48.745611302 +0000 UTC Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.821562 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.821597 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.821606 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.821619 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.821629 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:50Z","lastTransitionTime":"2026-02-19T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.827938 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-sw97l"] Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.828588 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:50 crc kubenswrapper[4889]: E0219 00:06:50.828683 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.851329 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19
T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.872137 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.893964 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.911871 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc 
kubenswrapper[4889]: I0219 00:06:50.923399 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.923457 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.923476 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.923499 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.923519 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:50Z","lastTransitionTime":"2026-02-19T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.932307 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.932379 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fcrh\" (UniqueName: \"kubernetes.io/projected/66e9b544-b66c-43d6-8d8d-d6231a70a6be-kube-api-access-2fcrh\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.933311 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.966756 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:50 crc kubenswrapper[4889]: I0219 00:06:50.994964 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:50Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.006347 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/1.log" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.010378 4889 scope.go:117] "RemoveContainer" containerID="51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.010710 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.010969 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" event={"ID":"9c2d69f3-4487-470a-b1e9-489e05f244eb","Type":"ContainerStarted","Data":"017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.011004 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" 
event={"ID":"9c2d69f3-4487-470a-b1e9-489e05f244eb","Type":"ContainerStarted","Data":"5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.011016 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" event={"ID":"9c2d69f3-4487-470a-b1e9-489e05f244eb","Type":"ContainerStarted","Data":"a7037627b765934a4fdbd2efe77ab27ab55a02f0c6e1aba3c502e6340bc639e1"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.015304 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3
d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.025853 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.025886 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.025895 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.025909 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.025919 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.033500 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fcrh\" (UniqueName: \"kubernetes.io/projected/66e9b544-b66c-43d6-8d8d-d6231a70a6be-kube-api-access-2fcrh\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.033571 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.033677 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.033733 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs podName:66e9b544-b66c-43d6-8d8d-d6231a70a6be nodeName:}" failed. No retries permitted until 2026-02-19 00:06:51.533717966 +0000 UTC m=+37.498382967 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs") pod "network-metrics-daemon-sw97l" (UID: "66e9b544-b66c-43d6-8d8d-d6231a70a6be") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.037300 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.051564 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.058515 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fcrh\" (UniqueName: \"kubernetes.io/projected/66e9b544-b66c-43d6-8d8d-d6231a70a6be-kube-api-access-2fcrh\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.077291 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.090294 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.104094 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.118006 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.128975 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc 
kubenswrapper[4889]: I0219 00:06:51.129013 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.129024 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.129041 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.129050 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.142205 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e9a821d9cb0797887613f538165280958825eeb302dc2dba63d063bbe7d9bda2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:47Z\\\",\\\"message\\\":\\\" 00:06:47.001286 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for 
removal\\\\nI0219 00:06:47.001305 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:06:47.001311 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:06:47.001331 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:06:47.001354 6212 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 00:06:47.001364 6212 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 00:06:47.001387 6212 factory.go:656] Stopping watch factory\\\\nI0219 00:06:47.001412 6212 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0219 00:06:47.001511 6212 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:06:47.001514 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:06:47.001524 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:06:47.001527 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:06:47.001529 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:06:47.001560 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:06:47.001547 6212 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0219 00:06:47.001550 6212 handler.go:208] Removed *v1.NetworkPolicy ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\" *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 00:06:49.323568 6354 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 00:06:49.323572 6354 ovn.go:134] Ensuring zone local for Pod 
openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nF0219 00:06:49.323570 6354 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:06:49.323578 6354 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0219 00:06:49.323586 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.157663 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.178793 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19
T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.192732 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.204710 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.216402 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.230571 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.232653 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.232718 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.232741 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.232774 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.232803 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.249498 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.262377 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.274039 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc 
kubenswrapper[4889]: I0219 00:06:51.288471 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.299589 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.299644 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.299654 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.299672 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.299683 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.308814 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.326734 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.327259 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.331807 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.331851 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.331868 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.331895 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.331912 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.345286 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z 
is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.347566 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.351568 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.351613 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.351631 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.351653 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.351670 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.367901 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\" *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 00:06:49.323568 6354 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 00:06:49.323572 6354 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node 
crc\\\\nF0219 00:06:49.323570 6354 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:06:49.323578 6354 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0219 00:06:49.323586 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.370794 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.374827 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.374879 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.374898 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.374923 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.374942 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.382051 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.390633 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.395165 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.395214 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.395254 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.395276 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.395288 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.402350 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.409899 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.410146 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.411834 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.411895 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.411921 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.411952 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.411977 4889 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.418568 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\
"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.516123 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.516717 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.516806 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.516895 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.516959 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.539079 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.539298 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.539383 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs podName:66e9b544-b66c-43d6-8d8d-d6231a70a6be nodeName:}" failed. No retries permitted until 2026-02-19 00:06:52.53936454 +0000 UTC m=+38.504029541 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs") pod "network-metrics-daemon-sw97l" (UID: "66e9b544-b66c-43d6-8d8d-d6231a70a6be") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.620365 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.620827 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.620939 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.621031 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.621110 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.723487 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.723541 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.723559 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.723580 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.723594 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.723943 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.724047 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.724113 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.724169 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.724244 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:51 crc kubenswrapper[4889]: E0219 00:06:51.724314 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.746447 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 10:17:48.102434231 +0000 UTC Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.826642 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.826690 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.826701 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.826718 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.826730 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.928776 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.929015 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.929178 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.929416 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:51 crc kubenswrapper[4889]: I0219 00:06:51.929615 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:51Z","lastTransitionTime":"2026-02-19T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.032916 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.033165 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.033250 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.033315 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.033411 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.136281 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.136640 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.136842 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.137027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.137199 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.239707 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.239762 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.239772 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.239793 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.239804 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.342147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.342246 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.342264 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.342288 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.342304 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.445313 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.445377 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.445394 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.445419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.445439 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.548374 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.548431 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.548445 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.548466 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.548481 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.561395 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:52 crc kubenswrapper[4889]: E0219 00:06:52.561659 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:52 crc kubenswrapper[4889]: E0219 00:06:52.561775 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs podName:66e9b544-b66c-43d6-8d8d-d6231a70a6be nodeName:}" failed. No retries permitted until 2026-02-19 00:06:54.561748084 +0000 UTC m=+40.526413105 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs") pod "network-metrics-daemon-sw97l" (UID: "66e9b544-b66c-43d6-8d8d-d6231a70a6be") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.651736 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.651790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.651811 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.651850 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.651865 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.724909 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:52 crc kubenswrapper[4889]: E0219 00:06:52.725107 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.747041 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:37:20.511922293 +0000 UTC Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.754202 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.754338 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.754366 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.754400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.754425 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.857717 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.857781 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.857805 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.857834 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.857855 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.961474 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.961545 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.961566 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.961593 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:52 crc kubenswrapper[4889]: I0219 00:06:52.961610 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:52Z","lastTransitionTime":"2026-02-19T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.066164 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.066519 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.066621 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.066723 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.066806 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.169878 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.169943 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.169956 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.169976 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.169986 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.273014 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.273055 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.273067 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.273085 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.273097 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.375991 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.376052 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.376071 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.376098 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.376117 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.480072 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.480359 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.480384 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.480417 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.480437 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.583818 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.583871 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.583886 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.583909 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.583927 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.687861 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.687906 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.687920 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.687938 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.687950 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.724960 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:53 crc kubenswrapper[4889]: E0219 00:06:53.725147 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.725004 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:53 crc kubenswrapper[4889]: E0219 00:06:53.725284 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.724977 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:53 crc kubenswrapper[4889]: E0219 00:06:53.725375 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.747869 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:35:19.831144069 +0000 UTC Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.791413 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.791539 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.791563 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.791630 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.791652 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.894807 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.894849 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.894860 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.894876 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.894887 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.999114 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.999155 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.999165 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.999182 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:53 crc kubenswrapper[4889]: I0219 00:06:53.999194 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:53Z","lastTransitionTime":"2026-02-19T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.102345 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.102407 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.102427 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.102448 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.102461 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:54Z","lastTransitionTime":"2026-02-19T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.204799 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.204846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.204859 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.204877 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.204892 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:54Z","lastTransitionTime":"2026-02-19T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.307850 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.307915 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.307935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.307963 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.307983 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:54Z","lastTransitionTime":"2026-02-19T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.410744 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.410776 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.410784 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.410797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.410805 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:54Z","lastTransitionTime":"2026-02-19T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.514172 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.514666 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.514945 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.515146 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.515376 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:54Z","lastTransitionTime":"2026-02-19T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.606551 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:54 crc kubenswrapper[4889]: E0219 00:06:54.606815 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:54 crc kubenswrapper[4889]: E0219 00:06:54.606924 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs podName:66e9b544-b66c-43d6-8d8d-d6231a70a6be nodeName:}" failed. No retries permitted until 2026-02-19 00:06:58.606893985 +0000 UTC m=+44.571558986 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs") pod "network-metrics-daemon-sw97l" (UID: "66e9b544-b66c-43d6-8d8d-d6231a70a6be") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.619753 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.619815 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.619826 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.619846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.619859 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:54Z","lastTransitionTime":"2026-02-19T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.722649 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.722703 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.722718 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.722738 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.722754 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:54Z","lastTransitionTime":"2026-02-19T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.725154 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:54 crc kubenswrapper[4889]: E0219 00:06:54.725355 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.746785 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.748161 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:53:05.596130885 +0000 UTC Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.762288 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.774846 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.787798 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.804458 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.820539 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.825678 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.825732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.825743 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.825781 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.825796 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:54Z","lastTransitionTime":"2026-02-19T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.834295 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc 
kubenswrapper[4889]: I0219 00:06:54.849436 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.862785 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.876035 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.891316 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.918365 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\" *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 00:06:49.323568 6354 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 00:06:49.323572 6354 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node 
crc\\\\nF0219 00:06:49.323570 6354 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:06:49.323578 6354 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0219 00:06:49.323586 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.920140 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.929748 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.929797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.929813 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.929838 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.929858 4889 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:54Z","lastTransitionTime":"2026-02-19T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.932585 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.948004 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.962969 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704
ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.978790 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:54 crc kubenswrapper[4889]: I0219 00:06:54.992266 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:54Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.007423 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.019268 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c
4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.032131 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.032208 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.032274 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc 
kubenswrapper[4889]: I0219 00:06:55.032298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.032314 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.034807 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43
6159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.046640 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b
5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.055896 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc 
kubenswrapper[4889]: I0219 00:06:55.071191 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3
c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.086406 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.097484 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.108892 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.132912 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\" *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 00:06:49.323568 6354 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 00:06:49.323572 6354 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node 
crc\\\\nF0219 00:06:49.323570 6354 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:06:49.323578 6354 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0219 00:06:49.323586 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.134819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.134874 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.134887 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.134911 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.134926 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.148282 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.161706 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.175130 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.193027 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.210784 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.237628 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.237651 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.237660 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.237674 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.237684 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.340025 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.340068 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.340076 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.340091 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.340100 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.443543 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.443612 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.443634 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.443664 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.443686 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.546436 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.546521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.546538 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.546558 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.546573 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.649664 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.649746 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.649764 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.649797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.649819 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.724271 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.724405 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:55 crc kubenswrapper[4889]: E0219 00:06:55.724691 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.724732 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:55 crc kubenswrapper[4889]: E0219 00:06:55.724831 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:55 crc kubenswrapper[4889]: E0219 00:06:55.724888 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.749253 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:38:40.810421057 +0000 UTC Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.753119 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.753213 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.753316 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.753346 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.753365 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.856883 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.856938 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.856947 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.856968 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.856979 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.960167 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.960310 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.960336 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.960367 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:55 crc kubenswrapper[4889]: I0219 00:06:55.960390 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:55Z","lastTransitionTime":"2026-02-19T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.064067 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.064131 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.064150 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.064174 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.064197 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.167928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.167999 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.168014 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.168033 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.168047 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.271851 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.271915 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.271928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.271949 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.271960 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.375195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.375572 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.375713 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.375844 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.375971 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.479567 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.479899 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.480098 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.480334 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.480518 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.583616 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.583673 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.583683 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.583702 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.583714 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.685732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.685790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.685802 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.685830 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.685845 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.728313 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:56 crc kubenswrapper[4889]: E0219 00:06:56.728486 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.752378 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 00:30:48.437674816 +0000 UTC Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.789066 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.789123 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.789142 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.789168 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.789192 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.893050 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.893100 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.893121 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.893178 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.893194 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.996714 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.996770 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.996787 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.996810 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:56 crc kubenswrapper[4889]: I0219 00:06:56.996826 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:56Z","lastTransitionTime":"2026-02-19T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.100206 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.100672 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.100759 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.100828 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.100900 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:57Z","lastTransitionTime":"2026-02-19T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.203800 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.204224 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.204318 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.204427 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.204492 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:57Z","lastTransitionTime":"2026-02-19T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.307765 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.307842 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.307861 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.307888 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.307906 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:57Z","lastTransitionTime":"2026-02-19T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.411835 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.411907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.411932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.411967 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.411991 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:57Z","lastTransitionTime":"2026-02-19T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.514969 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.515036 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.515060 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.515090 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.515113 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:57Z","lastTransitionTime":"2026-02-19T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.618500 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.618581 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.618597 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.618627 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.618663 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:57Z","lastTransitionTime":"2026-02-19T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.722699 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.722777 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.722790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.722811 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.722826 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:57Z","lastTransitionTime":"2026-02-19T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.723989 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.724075 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.723994 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:06:57 crc kubenswrapper[4889]: E0219 00:06:57.724172 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:06:57 crc kubenswrapper[4889]: E0219 00:06:57.724322 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:06:57 crc kubenswrapper[4889]: E0219 00:06:57.724456 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.752554 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 19:21:40.838968009 +0000 UTC Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.825552 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.825639 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.825658 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.825693 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.825712 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:57Z","lastTransitionTime":"2026-02-19T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.936582 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.936650 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.936663 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.936682 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:57 crc kubenswrapper[4889]: I0219 00:06:57.936694 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:57Z","lastTransitionTime":"2026-02-19T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.039571 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.039620 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.039636 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.039662 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.039681 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.143115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.143181 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.143203 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.143270 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.143294 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.246050 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.246114 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.246137 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.246167 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.246187 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.349105 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.349178 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.349192 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.349221 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.349270 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.452618 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.452689 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.452711 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.452748 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.452784 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.556648 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.556697 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.556711 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.556733 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.556748 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.660381 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.660432 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.660445 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.660466 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.660483 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.674162 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:58 crc kubenswrapper[4889]: E0219 00:06:58.674389 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:58 crc kubenswrapper[4889]: E0219 00:06:58.674495 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs podName:66e9b544-b66c-43d6-8d8d-d6231a70a6be nodeName:}" failed. No retries permitted until 2026-02-19 00:07:06.674473683 +0000 UTC m=+52.639138674 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs") pod "network-metrics-daemon-sw97l" (UID: "66e9b544-b66c-43d6-8d8d-d6231a70a6be") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.724327 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:06:58 crc kubenswrapper[4889]: E0219 00:06:58.724490 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.753484 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:56:07.916850763 +0000 UTC
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.764714 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.764764 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.764773 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.764793 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.764808 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.868353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.868418 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.868428 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.868448 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.868460 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.970952 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.970995 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.971005 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.971020 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:58 crc kubenswrapper[4889]: I0219 00:06:58.971033 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:58Z","lastTransitionTime":"2026-02-19T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.074247 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.074751 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.074802 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.074835 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.074855 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:59Z","lastTransitionTime":"2026-02-19T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.177156 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.177194 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.177203 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.177235 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.177246 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:59Z","lastTransitionTime":"2026-02-19T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.279568 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.279609 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.279621 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.279644 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.279658 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:59Z","lastTransitionTime":"2026-02-19T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.382766 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.382816 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.382830 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.382852 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.382867 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:59Z","lastTransitionTime":"2026-02-19T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.485534 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.485584 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.485599 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.485624 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.485641 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:59Z","lastTransitionTime":"2026-02-19T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.588592 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.588681 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.588694 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.588718 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.588731 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:59Z","lastTransitionTime":"2026-02-19T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.692120 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.692162 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.692170 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.692189 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.692199 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:59Z","lastTransitionTime":"2026-02-19T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.724817 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.724887 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.724958 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:06:59 crc kubenswrapper[4889]: E0219 00:06:59.725010 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:06:59 crc kubenswrapper[4889]: E0219 00:06:59.725203 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:06:59 crc kubenswrapper[4889]: E0219 00:06:59.725353 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.754675 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:53:17.132385715 +0000 UTC
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.795198 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.795266 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.795279 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.795301 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.795321 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:59Z","lastTransitionTime":"2026-02-19T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.897818 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.897872 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.897883 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.897906 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:06:59 crc kubenswrapper[4889]: I0219 00:06:59.897922 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:06:59Z","lastTransitionTime":"2026-02-19T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.001441 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.001496 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.001505 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.001525 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.001539 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.104876 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.104933 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.104946 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.104994 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.105009 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.208107 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.208174 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.208187 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.208250 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.208265 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.310782 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.310820 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.310834 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.310854 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.310869 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.414320 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.414385 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.414398 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.414420 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.414437 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.518079 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.518128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.518139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.518158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.518171 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.621126 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.621170 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.621181 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.621195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.621207 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.723772 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.723801 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.723809 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.723822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.724137 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:00 crc kubenswrapper[4889]: E0219 00:07:00.724278 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.724460 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.755655 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:59:16.283685524 +0000 UTC
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.828052 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.828101 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.828114 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.828134 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.828146 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.930959 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.931009 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.931021 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.931039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:00 crc kubenswrapper[4889]: I0219 00:07:00.931051 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:00Z","lastTransitionTime":"2026-02-19T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.034097 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.034159 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.034173 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.034202 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.034257 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.137143 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.137211 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.137282 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.137338 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.137380 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.240468 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.240530 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.240545 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.240568 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.240579 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.344512 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.344694 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.344714 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.344778 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.344795 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.447435 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.447543 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.447553 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.447596 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.447607 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.550196 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.550261 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.550275 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.550299 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.550314 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.654565 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.654627 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.654662 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.654680 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.654692 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.724523 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.724528 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:01 crc kubenswrapper[4889]: E0219 00:07:01.724729 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:01 crc kubenswrapper[4889]: E0219 00:07:01.724827 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.724550 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:01 crc kubenswrapper[4889]: E0219 00:07:01.725040 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.755851 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:25:26.317986416 +0000 UTC Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.757377 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.757451 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.757488 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.757518 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.757540 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.780775 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.780819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.780830 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.780848 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.780860 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:01 crc kubenswrapper[4889]: E0219 00:07:01.792200 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.795481 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.795518 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.795535 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.795552 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.795564 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:01 crc kubenswrapper[4889]: E0219 00:07:01.808973 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.811661 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.811697 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.811706 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.811721 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.811731 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:01 crc kubenswrapper[4889]: E0219 00:07:01.822590 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.827020 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.827054 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.827063 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.827083 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.827099 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:01 crc kubenswrapper[4889]: E0219 00:07:01.841310 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.845495 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.845528 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.845537 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.845550 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.845560 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:01 crc kubenswrapper[4889]: E0219 00:07:01.858051 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:01Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:01 crc kubenswrapper[4889]: E0219 00:07:01.858166 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.859906 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.859947 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.859958 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.859974 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.859986 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.962482 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.962560 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.962572 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.962593 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:01 crc kubenswrapper[4889]: I0219 00:07:01.962606 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:01Z","lastTransitionTime":"2026-02-19T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.065613 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.065731 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.065743 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.065758 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.065771 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.167887 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.167927 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.167936 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.167949 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.167959 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.270785 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.270840 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.270855 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.270878 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.270892 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.373462 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.373503 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.373512 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.373527 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.373538 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.476255 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.476298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.476307 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.476322 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.476333 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.579494 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.579559 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.579572 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.579591 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.579605 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.681783 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.681819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.681831 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.681846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.681857 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.724540 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:02 crc kubenswrapper[4889]: E0219 00:07:02.724658 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.756564 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 19:17:31.491894881 +0000 UTC Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.784342 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.784372 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.784384 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.784400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.784411 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.887796 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.887883 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.887907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.887938 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.887975 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.991769 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.991852 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.991875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.991904 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:02 crc kubenswrapper[4889]: I0219 00:07:02.991929 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:02Z","lastTransitionTime":"2026-02-19T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.094688 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.094750 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.094764 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.094784 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.094798 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:03Z","lastTransitionTime":"2026-02-19T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.197630 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.197677 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.197689 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.197706 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.197718 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:03Z","lastTransitionTime":"2026-02-19T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.300732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.300794 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.300805 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.300824 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.300836 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:03Z","lastTransitionTime":"2026-02-19T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.403704 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.403752 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.403767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.403785 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.403798 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:03Z","lastTransitionTime":"2026-02-19T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.508707 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.508752 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.508762 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.508781 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.508794 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:03Z","lastTransitionTime":"2026-02-19T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.611832 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.611883 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.611894 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.611914 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.611927 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:03Z","lastTransitionTime":"2026-02-19T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.715587 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.715668 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.715697 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.715726 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.715746 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:03Z","lastTransitionTime":"2026-02-19T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.724490 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.724634 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:03 crc kubenswrapper[4889]: E0219 00:07:03.724751 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.724490 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:03 crc kubenswrapper[4889]: E0219 00:07:03.724931 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:03 crc kubenswrapper[4889]: E0219 00:07:03.725067 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.757568 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:42:21.4517793 +0000 UTC Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.818744 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.818808 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.818820 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.818842 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.818855 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:03Z","lastTransitionTime":"2026-02-19T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.921756 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.921834 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.921871 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.921903 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:03 crc kubenswrapper[4889]: I0219 00:07:03.921927 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:03Z","lastTransitionTime":"2026-02-19T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.025588 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.025618 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.025626 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.025639 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.025659 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.129128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.129175 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.129187 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.129206 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.129242 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.232811 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.232859 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.232868 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.232881 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.232890 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.335419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.335463 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.335474 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.335507 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.335517 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.438295 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.438363 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.438372 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.438389 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.438425 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.541420 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.541475 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.541484 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.541499 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.541508 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.645483 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.645583 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.645597 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.645622 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.645638 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.724945 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:04 crc kubenswrapper[4889]: E0219 00:07:04.725150 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.726102 4889 scope.go:117] "RemoveContainer" containerID="51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.748774 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\
\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.755501 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.755560 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.755574 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.755596 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.755610 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.758720 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:30:45.460888344 +0000 UTC Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.767765 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.787167 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.804309 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.826881 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\" *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 00:06:49.323568 6354 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 00:06:49.323572 6354 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node 
crc\\\\nF0219 00:06:49.323570 6354 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:06:49.323578 6354 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0219 00:06:49.323586 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.842401 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.856423 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
0:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.860106 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.860147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.860157 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.860175 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.860185 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.871928 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08
c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.886514 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.902028 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc 
kubenswrapper[4889]: I0219 00:07:04.919519 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3
c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.935211 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.947535 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.961065 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.962343 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.962519 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.962576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.962603 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.962642 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:04Z","lastTransitionTime":"2026-02-19T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.977603 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:04 crc kubenswrapper[4889]: I0219 00:07:04.989779 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.060590 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/1.log" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.074639 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.074683 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.074694 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.074714 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.074729 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:05Z","lastTransitionTime":"2026-02-19T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.077047 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4"} Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.077671 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.095762 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.111474 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.128062 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.145132 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.171455 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\" *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 00:06:49.323568 6354 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 00:06:49.323572 6354 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node 
crc\\\\nF0219 00:06:49.323570 6354 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:06:49.323578 6354 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0219 00:06:49.323586 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.177001 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.177028 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.177039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.177055 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.177067 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:05Z","lastTransitionTime":"2026-02-19T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.185017 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.202408 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.220978 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.237230 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.250815 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc 
kubenswrapper[4889]: I0219 00:07:05.276058 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3
c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.279238 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.279265 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.279276 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.279291 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.279302 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:05Z","lastTransitionTime":"2026-02-19T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.302563 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.314442 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.327500 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.342277 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.355660 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.381290 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.381340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.381357 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.381372 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.381383 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:05Z","lastTransitionTime":"2026-02-19T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.484348 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.484793 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.484868 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.484942 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.485003 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:05Z","lastTransitionTime":"2026-02-19T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.587429 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.587882 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.587959 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.588035 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.588097 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:05Z","lastTransitionTime":"2026-02-19T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.690691 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.690730 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.690742 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.690757 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.690772 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:05Z","lastTransitionTime":"2026-02-19T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.724740 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:05 crc kubenswrapper[4889]: E0219 00:07:05.724883 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.725285 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:05 crc kubenswrapper[4889]: E0219 00:07:05.725353 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.725406 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:05 crc kubenswrapper[4889]: E0219 00:07:05.725458 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.759486 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:58:55.338577039 +0000 UTC Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.793396 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.793425 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.793435 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.793451 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.793460 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:05Z","lastTransitionTime":"2026-02-19T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.897043 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.897087 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.897098 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.897116 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:05 crc kubenswrapper[4889]: I0219 00:07:05.897127 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:05Z","lastTransitionTime":"2026-02-19T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.000552 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.000620 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.000638 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.000670 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.000695 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.085507 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/2.log" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.086536 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/1.log" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.091370 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4" exitCode=1 Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.091452 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.091639 4889 scope.go:117] "RemoveContainer" containerID="51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.092196 4889 scope.go:117] "RemoveContainer" containerID="491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4" Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.092401 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.103467 4889 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.103518 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.103534 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.103556 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.103573 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.117502 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc 
kubenswrapper[4889]: I0219 00:07:06.135532 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3
c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.154731 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.169120 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.186247 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.205128 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.206366 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.206408 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.206424 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.206443 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.206457 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.218975 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.236422 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.250942 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.260776 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.260934 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.261009 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.261050 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.261091 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261141 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:07:38.261097534 +0000 UTC m=+84.225762565 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261167 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261192 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261277 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261286 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261309 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261321 4889 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:38.26130381 +0000 UTC m=+84.225968841 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261354 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:38.261341931 +0000 UTC m=+84.226006962 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261381 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:38.261368382 +0000 UTC m=+84.226033413 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261388 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261434 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261451 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.261535 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:07:38.261509847 +0000 UTC m=+84.226174898 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.266793 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.282770 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.306620 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://51774c2a37ca4ce2010552622175f3b823e98a4e026614e71256358cd9969896\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"message\\\":\\\" *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI0219 00:06:49.323568 6354 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0219 00:06:49.323572 6354 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node 
crc\\\\nF0219 00:06:49.323570 6354 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:06:49Z is after 2025-08-24T17:21:41Z]\\\\nI0219 00:06:49.323578 6354 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0219 00:06:49.323586 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:05Z\\\",\\\"message\\\":\\\"informers/factory.go:160\\\\nI0219 00:07:05.823460 6561 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 00:07:05.823500 6561 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:05.824052 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:07:05.824086 6561 
handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:07:05.824094 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:07:05.824115 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:05.824123 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:07:05.824150 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:05.824178 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:05.824190 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:07:05.824239 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:05.824252 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:05.824242 6561 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.309190 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.309292 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.309312 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.309337 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.309358 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.321939 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.339297 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.359086 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.377536 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.411904 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.411974 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.411993 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.412021 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.412047 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.515125 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.515167 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.515178 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.515199 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.515248 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.617771 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.617816 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.617829 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.617847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.617859 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.720594 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.720628 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.720639 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.720654 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.720667 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.724712 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.724958 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.760576 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:27:14.912744299 +0000 UTC Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.766344 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.766478 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: E0219 00:07:06.766551 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs podName:66e9b544-b66c-43d6-8d8d-d6231a70a6be nodeName:}" failed. No retries permitted until 2026-02-19 00:07:22.76653352 +0000 UTC m=+68.731198521 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs") pod "network-metrics-daemon-sw97l" (UID: "66e9b544-b66c-43d6-8d8d-d6231a70a6be") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.823798 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.823845 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.823857 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.823878 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.823895 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.927656 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.927700 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.927713 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.927735 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:06 crc kubenswrapper[4889]: I0219 00:07:06.927751 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:06Z","lastTransitionTime":"2026-02-19T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.030834 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.030883 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.030898 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.030924 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.030942 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.099811 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/2.log" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.104591 4889 scope.go:117] "RemoveContainer" containerID="491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4" Feb 19 00:07:07 crc kubenswrapper[4889]: E0219 00:07:07.104848 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.122881 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.133607 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.133648 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.133656 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.133675 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.133699 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.136653 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.151005 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.165824 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.188414 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:05Z\\\",\\\"message\\\":\\\"informers/factory.go:160\\\\nI0219 00:07:05.823460 6561 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 00:07:05.823500 6561 reflector.go:311] Stopping reflector *v1.Node (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:05.824052 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:07:05.824086 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:07:05.824094 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:07:05.824115 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:05.824123 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:07:05.824150 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:05.824178 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:05.824190 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:07:05.824239 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:05.824252 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:05.824242 6561 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.202150 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.215795 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
0:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.237119 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.237076 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.237155 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.237171 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.237194 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.237207 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.253100 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-c
luster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.268114 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac35
3286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.283645 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.297759 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.312917 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.330848 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.340117 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.340158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.340170 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.340192 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.340205 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.344579 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.359708 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:07Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:07 crc 
kubenswrapper[4889]: I0219 00:07:07.442983 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.443039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.443050 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.443071 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.443083 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.546303 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.546358 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.546369 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.546387 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.546402 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.649008 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.649065 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.649080 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.649102 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.649123 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.724651 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.724748 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.724675 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:07 crc kubenswrapper[4889]: E0219 00:07:07.724930 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:07 crc kubenswrapper[4889]: E0219 00:07:07.725093 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:07 crc kubenswrapper[4889]: E0219 00:07:07.725208 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.752276 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.752351 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.752369 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.752396 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.752415 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.761462 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:23:48.105184304 +0000 UTC Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.854749 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.854822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.854844 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.854876 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.854900 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.958317 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.958368 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.958385 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.958408 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:07 crc kubenswrapper[4889]: I0219 00:07:07.958426 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:07Z","lastTransitionTime":"2026-02-19T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.061346 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.061417 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.061436 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.061460 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.061481 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.165209 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.165300 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.165329 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.165359 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.165382 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.268837 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.268926 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.268937 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.268956 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.268969 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.371868 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.371910 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.371926 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.371945 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.371956 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.476850 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.476925 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.476939 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.476975 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.476989 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.579524 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.579572 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.579586 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.579608 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.579625 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.681950 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.681997 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.682008 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.682029 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.682042 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.724718 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:08 crc kubenswrapper[4889]: E0219 00:07:08.725007 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.762381 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 08:50:45.069071869 +0000 UTC Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.785327 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.785386 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.785397 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.785423 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.785440 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.888100 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.888142 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.888152 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.888171 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.888185 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.991350 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.991391 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.991400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.991416 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:08 crc kubenswrapper[4889]: I0219 00:07:08.991427 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:08Z","lastTransitionTime":"2026-02-19T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.094708 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.094761 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.094775 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.094800 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.094815 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:09Z","lastTransitionTime":"2026-02-19T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.198540 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.198599 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.198609 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.198628 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.198640 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:09Z","lastTransitionTime":"2026-02-19T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.301901 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.301952 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.301963 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.301983 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.301997 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:09Z","lastTransitionTime":"2026-02-19T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.404415 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.404467 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.404493 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.404515 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.404539 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:09Z","lastTransitionTime":"2026-02-19T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.506907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.506971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.506986 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.507010 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.507021 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:09Z","lastTransitionTime":"2026-02-19T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.610052 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.610112 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.610130 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.610171 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.610189 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:09Z","lastTransitionTime":"2026-02-19T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.713652 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.713712 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.713724 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.713746 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.713758 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:09Z","lastTransitionTime":"2026-02-19T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.724868 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.724965 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.724875 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:09 crc kubenswrapper[4889]: E0219 00:07:09.725019 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:09 crc kubenswrapper[4889]: E0219 00:07:09.725126 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:09 crc kubenswrapper[4889]: E0219 00:07:09.725176 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.763402 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:22:46.279384354 +0000 UTC Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.817136 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.817181 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.817191 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.817208 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.817233 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:09Z","lastTransitionTime":"2026-02-19T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.920657 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.920702 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.920711 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.920731 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:09 crc kubenswrapper[4889]: I0219 00:07:09.920742 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:09Z","lastTransitionTime":"2026-02-19T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.024079 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.024119 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.024128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.024143 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.024153 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.126195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.126249 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.126257 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.126275 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.126287 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.149575 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.162334 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.167075 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.185995 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c073
72b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.201049 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.220535 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f
0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.228821 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.228875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.228891 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.228916 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.228929 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.234577 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.246010 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc 
kubenswrapper[4889]: I0219 00:07:10.259506 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3
c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.273507 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.293993 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.305917 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.325847 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:05Z\\\",\\\"message\\\":\\\"informers/factory.go:160\\\\nI0219 00:07:05.823460 6561 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 00:07:05.823500 6561 reflector.go:311] Stopping reflector *v1.Node (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:05.824052 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:07:05.824086 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:07:05.824094 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:07:05.824115 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:05.824123 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:07:05.824150 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:05.824178 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:05.824190 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:07:05.824239 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:05.824252 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:05.824242 6561 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.332578 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.332698 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.332722 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.332757 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.332783 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.337022 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.350255 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.365921 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.380019 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.397098 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:10Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.435738 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc 
kubenswrapper[4889]: I0219 00:07:10.436101 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.436273 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.436448 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.436540 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.539975 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.540031 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.540042 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.540061 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.540074 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.643180 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.643272 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.643284 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.643303 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.643318 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.725110 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:10 crc kubenswrapper[4889]: E0219 00:07:10.725276 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.746533 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.746592 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.746602 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.746623 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.746634 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.764306 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:25:08.334141914 +0000 UTC Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.849182 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.849233 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.849242 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.849258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.849271 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.951716 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.951784 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.951797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.951815 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:10 crc kubenswrapper[4889]: I0219 00:07:10.951828 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:10Z","lastTransitionTime":"2026-02-19T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.055040 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.055078 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.055087 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.055102 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.055113 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.158579 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.158628 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.158648 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.158671 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.158688 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.261007 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.261036 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.261043 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.261056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.261064 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.364028 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.364087 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.364108 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.364134 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.364156 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.467767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.467858 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.467886 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.467919 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.467943 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.570732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.570776 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.570789 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.570809 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.570823 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.674055 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.674105 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.674119 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.674139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.674151 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.724282 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.724284 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:11 crc kubenswrapper[4889]: E0219 00:07:11.724468 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.724306 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:11 crc kubenswrapper[4889]: E0219 00:07:11.724534 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:11 crc kubenswrapper[4889]: E0219 00:07:11.724773 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.765300 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 23:08:26.11612302 +0000 UTC Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.776247 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.776291 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.776303 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.776321 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.776333 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.879574 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.879623 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.879634 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.879649 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.879660 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.982544 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.982595 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.982610 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.982629 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:11 crc kubenswrapper[4889]: I0219 00:07:11.982643 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:11Z","lastTransitionTime":"2026-02-19T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.013660 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.013742 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.013774 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.013803 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.013827 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: E0219 00:07:12.031742 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.037089 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.037119 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.037131 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.037146 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.037160 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: E0219 00:07:12.057483 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.066722 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.066788 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.066808 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.066843 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.066863 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: E0219 00:07:12.086550 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.091929 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.091984 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.091997 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.092019 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.092034 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: E0219 00:07:12.112188 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.118388 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.118444 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.118459 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.118484 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.118498 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: E0219 00:07:12.142250 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:12Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:12 crc kubenswrapper[4889]: E0219 00:07:12.142596 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.145863 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.145911 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.145924 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.145945 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.145959 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.249026 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.249083 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.249096 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.249115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.249131 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.352296 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.352371 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.352385 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.352408 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.352422 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.454670 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.454722 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.454731 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.454749 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.454763 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.557093 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.557157 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.557170 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.557184 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.557192 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.659995 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.660065 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.660077 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.660101 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.660116 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.724595 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:12 crc kubenswrapper[4889]: E0219 00:07:12.724839 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.762638 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.762678 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.762691 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.762717 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.762729 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.765512 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:17:05.298001462 +0000 UTC Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.866055 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.866110 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.866122 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.866139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.866151 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.969559 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.969598 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.969610 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.969630 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:12 crc kubenswrapper[4889]: I0219 00:07:12.969645 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:12Z","lastTransitionTime":"2026-02-19T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.073018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.073114 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.073126 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.073151 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.073169 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:13Z","lastTransitionTime":"2026-02-19T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.176489 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.176593 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.176607 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.176666 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.176688 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:13Z","lastTransitionTime":"2026-02-19T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.280635 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.280697 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.280710 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.280733 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.280747 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:13Z","lastTransitionTime":"2026-02-19T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.384083 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.384138 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.384149 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.384170 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.384184 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:13Z","lastTransitionTime":"2026-02-19T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.487154 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.487209 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.487256 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.487279 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.487291 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:13Z","lastTransitionTime":"2026-02-19T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.591747 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.591822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.591833 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.591854 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.591867 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:13Z","lastTransitionTime":"2026-02-19T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.694661 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.694721 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.694732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.694749 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.694760 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:13Z","lastTransitionTime":"2026-02-19T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.724052 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.724127 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.724137 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:13 crc kubenswrapper[4889]: E0219 00:07:13.724696 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:13 crc kubenswrapper[4889]: E0219 00:07:13.724638 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:13 crc kubenswrapper[4889]: E0219 00:07:13.724819 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.765822 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:03:33.669286163 +0000 UTC Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.798226 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.798271 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.798281 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.798296 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.798309 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:13Z","lastTransitionTime":"2026-02-19T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.901285 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.901361 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.901383 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.901418 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:13 crc kubenswrapper[4889]: I0219 00:07:13.901441 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:13Z","lastTransitionTime":"2026-02-19T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.004522 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.004575 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.004587 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.004607 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.004621 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.108042 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.109413 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.110115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.110974 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.111683 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.215553 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.215608 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.215624 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.215650 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.215669 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.318890 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.318939 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.318950 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.318971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.318983 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.423332 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.423482 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.424278 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.424357 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.424388 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.528395 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.528472 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.528485 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.528502 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.528514 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.631653 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.632000 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.632118 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.632271 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.632404 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.724080 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:14 crc kubenswrapper[4889]: E0219 00:07:14.724285 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.736588 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.737374 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.737400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.737436 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.737466 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.745493 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.764936 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.765923 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:45:00.60173406 +0000 UTC Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.780433 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41992760-f399-4bd2-9fdc-9052243d56c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b33f17820d114dd80587f02bb83a5640ac50d15f83fdb2370c3d2360e884bac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064b6b6e4b736982c7e5d1740076d6270a4678f07621198dd39a5a09091e0cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91dd67ae61edafbb15c77e284693bb96500f9b0af7d2418d8f88d6d4b094d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.803716 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.822647 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac35
3286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.837797 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.840756 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.840819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.840835 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc 
kubenswrapper[4889]: I0219 00:07:14.840861 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.840904 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.854728 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.871185 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c
4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.890857 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f
0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.904649 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.915471 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc 
kubenswrapper[4889]: I0219 00:07:14.929115 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.942844 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.943525 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.943573 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.943584 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.943603 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.943613 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:14Z","lastTransitionTime":"2026-02-19T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.957775 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.971550 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.990846 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:05Z\\\",\\\"message\\\":\\\"informers/factory.go:160\\\\nI0219 00:07:05.823460 6561 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 00:07:05.823500 6561 reflector.go:311] Stopping reflector *v1.Node (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:05.824052 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:07:05.824086 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:07:05.824094 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:07:05.824115 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:05.824123 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:07:05.824150 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:05.824178 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:05.824190 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:07:05.824239 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:05.824252 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:05.824242 6561 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:14 crc kubenswrapper[4889]: I0219 00:07:14.999612 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.046299 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.046606 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.046681 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.046766 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.046848 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.148980 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.149038 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.149056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.149081 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.149100 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.252597 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.252701 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.252726 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.252756 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.252779 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.356522 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.356595 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.356618 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.356646 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.356665 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.460660 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.460736 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.460756 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.460787 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.460810 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.564400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.564464 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.564483 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.564514 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.564538 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.667613 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.667661 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.667673 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.667690 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.667705 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.724333 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.724381 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.724465 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:15 crc kubenswrapper[4889]: E0219 00:07:15.724567 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:15 crc kubenswrapper[4889]: E0219 00:07:15.724691 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:15 crc kubenswrapper[4889]: E0219 00:07:15.724840 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.767117 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 12:38:08.570695845 +0000 UTC Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.770410 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.770482 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.770508 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.770540 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.770565 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.872617 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.872673 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.872685 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.872705 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.872719 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.975250 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.975428 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.975452 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.975476 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:15 crc kubenswrapper[4889]: I0219 00:07:15.975494 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:15Z","lastTransitionTime":"2026-02-19T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.078737 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.078823 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.078841 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.078867 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.078885 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:16Z","lastTransitionTime":"2026-02-19T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.182475 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.182540 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.182558 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.182584 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.182607 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:16Z","lastTransitionTime":"2026-02-19T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.285624 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.285675 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.285692 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.285715 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.285750 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:16Z","lastTransitionTime":"2026-02-19T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.388423 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.388853 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.389045 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.389287 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.389556 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:16Z","lastTransitionTime":"2026-02-19T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.492759 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.492794 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.492804 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.492819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.492832 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:16Z","lastTransitionTime":"2026-02-19T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.594963 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.595009 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.595018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.595030 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.595038 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:16Z","lastTransitionTime":"2026-02-19T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.697929 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.698351 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.698543 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.698718 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.698878 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:16Z","lastTransitionTime":"2026-02-19T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.724524 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:16 crc kubenswrapper[4889]: E0219 00:07:16.724710 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.768285 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:01:06.477554704 +0000 UTC Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.801731 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.802080 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.802192 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.802332 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.802443 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:16Z","lastTransitionTime":"2026-02-19T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.905937 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.905985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.905999 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.906016 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:16 crc kubenswrapper[4889]: I0219 00:07:16.906050 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:16Z","lastTransitionTime":"2026-02-19T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.009084 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.009421 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.009511 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.009599 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.009672 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.112368 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.112406 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.112417 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.112431 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.112443 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.214778 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.214830 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.214843 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.214855 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.214866 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.317780 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.317837 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.317864 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.317887 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.317906 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.420889 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.420947 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.420964 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.420987 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.421004 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.524148 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.524255 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.524268 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.524285 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.524297 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.626667 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.626717 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.626730 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.626748 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.626765 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.725021 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.725096 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.725197 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:17 crc kubenswrapper[4889]: E0219 00:07:17.725404 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:17 crc kubenswrapper[4889]: E0219 00:07:17.725610 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:17 crc kubenswrapper[4889]: E0219 00:07:17.725781 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.728916 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.728969 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.728987 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.729010 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.729027 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.768801 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:36:04.257728379 +0000 UTC Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.832138 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.832184 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.832203 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.832253 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.832273 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.934984 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.936269 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.936674 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.936920 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:17 crc kubenswrapper[4889]: I0219 00:07:17.937104 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:17Z","lastTransitionTime":"2026-02-19T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.040424 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.040497 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.040519 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.040542 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.040559 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.144725 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.145071 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.145201 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.145376 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.145502 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.249277 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.249653 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.249803 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.249942 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.250185 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.353578 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.353671 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.353706 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.353733 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.353751 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.457164 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.457273 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.457294 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.457322 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.457344 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.563151 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.563268 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.563301 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.563329 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.563347 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.665739 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.665794 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.665812 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.665833 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.665850 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.725092 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:18 crc kubenswrapper[4889]: E0219 00:07:18.725923 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.726417 4889 scope.go:117] "RemoveContainer" containerID="491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4" Feb 19 00:07:18 crc kubenswrapper[4889]: E0219 00:07:18.726750 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.768549 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.768593 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.768606 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.768621 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.768633 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.769127 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:34:02.177401745 +0000 UTC Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.872016 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.872341 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.872443 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.872518 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.872589 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.975016 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.975082 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.975098 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.975122 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:18 crc kubenswrapper[4889]: I0219 00:07:18.975142 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:18Z","lastTransitionTime":"2026-02-19T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.077377 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.077425 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.077440 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.077459 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.077473 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:19Z","lastTransitionTime":"2026-02-19T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.179622 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.179650 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.179659 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.179671 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.179679 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:19Z","lastTransitionTime":"2026-02-19T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.281547 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.281576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.281586 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.281598 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.281607 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:19Z","lastTransitionTime":"2026-02-19T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.384493 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.384536 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.384547 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.384563 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.384573 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:19Z","lastTransitionTime":"2026-02-19T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.486974 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.487016 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.487030 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.487060 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.487072 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:19Z","lastTransitionTime":"2026-02-19T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.589306 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.589351 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.589362 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.589379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.589390 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:19Z","lastTransitionTime":"2026-02-19T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.691553 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.691602 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.691619 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.691643 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.691659 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:19Z","lastTransitionTime":"2026-02-19T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.724585 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.724612 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.724622 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:19 crc kubenswrapper[4889]: E0219 00:07:19.724749 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:19 crc kubenswrapper[4889]: E0219 00:07:19.724831 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:19 crc kubenswrapper[4889]: E0219 00:07:19.724912 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.769276 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:32:51.915669172 +0000 UTC Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.794335 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.794363 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.794373 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.794403 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.794414 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:19Z","lastTransitionTime":"2026-02-19T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.898066 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.898157 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.898169 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.898192 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:19 crc kubenswrapper[4889]: I0219 00:07:19.898206 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:19Z","lastTransitionTime":"2026-02-19T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.001085 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.001160 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.001180 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.001211 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.001257 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.104165 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.104228 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.104240 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.104258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.104270 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.207031 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.207075 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.207089 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.207106 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.207119 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.309642 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.309681 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.309695 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.309710 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.309721 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.412956 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.413039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.413060 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.413135 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.413162 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.515695 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.515728 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.515737 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.515751 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.515760 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.618413 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.618450 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.618467 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.618488 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.618501 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.721147 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.721195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.721204 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.721240 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.721250 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.724441 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:20 crc kubenswrapper[4889]: E0219 00:07:20.724574 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.770380 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:51:13.937328747 +0000 UTC Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.823952 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.824017 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.824033 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.824054 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.824070 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.927757 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.927833 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.927854 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.927882 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:20 crc kubenswrapper[4889]: I0219 00:07:20.927902 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:20Z","lastTransitionTime":"2026-02-19T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.030411 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.030446 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.030457 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.030474 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.030485 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.133406 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.133443 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.133450 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.133480 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.133490 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.235734 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.235787 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.235798 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.235822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.235834 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.338689 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.338740 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.338750 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.338768 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.338778 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.441254 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.441314 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.441333 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.441357 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.441376 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.545045 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.545121 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.545136 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.545155 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.545168 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.648098 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.648172 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.648186 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.648203 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.648235 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.724875 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.724939 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.724952 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:21 crc kubenswrapper[4889]: E0219 00:07:21.725088 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:21 crc kubenswrapper[4889]: E0219 00:07:21.725168 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:21 crc kubenswrapper[4889]: E0219 00:07:21.725366 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.751205 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.751267 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.751279 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.751296 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.751307 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.770838 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:42:41.770167717 +0000 UTC Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.853958 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.854012 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.854026 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.854056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.854076 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.957579 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.957633 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.957646 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.957666 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:21 crc kubenswrapper[4889]: I0219 00:07:21.957682 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:21Z","lastTransitionTime":"2026-02-19T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.060503 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.060606 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.060634 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.060665 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.060685 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.164292 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.164322 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.164332 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.164346 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.164379 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.267865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.267932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.267949 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.267974 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.267993 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.371751 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.372184 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.372454 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.372497 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.372662 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.388482 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.388556 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.388569 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.388583 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.388593 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: E0219 00:07:22.402046 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.406605 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.406658 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.406673 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.406692 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.406709 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: E0219 00:07:22.423159 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.429042 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.429330 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.429457 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.429560 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.429669 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: E0219 00:07:22.443800 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.449633 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.449676 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.449686 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.449701 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.449713 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: E0219 00:07:22.461941 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.467494 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.467563 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.467576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.467602 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.467617 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: E0219 00:07:22.481959 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:22 crc kubenswrapper[4889]: E0219 00:07:22.482105 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.484199 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.484325 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.484353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.484394 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.484424 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.587552 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.587597 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.587608 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.587624 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.587636 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.690868 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.690932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.690950 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.690975 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.690993 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.724555 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:22 crc kubenswrapper[4889]: E0219 00:07:22.724776 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.771458 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:40:28.309796645 +0000 UTC Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.794163 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.794278 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.794315 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.794347 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.794371 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.847337 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:22 crc kubenswrapper[4889]: E0219 00:07:22.847568 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:07:22 crc kubenswrapper[4889]: E0219 00:07:22.847723 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs podName:66e9b544-b66c-43d6-8d8d-d6231a70a6be nodeName:}" failed. No retries permitted until 2026-02-19 00:07:54.847682145 +0000 UTC m=+100.812347306 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs") pod "network-metrics-daemon-sw97l" (UID: "66e9b544-b66c-43d6-8d8d-d6231a70a6be") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.897385 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.897466 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.897487 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.897510 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:22 crc kubenswrapper[4889]: I0219 00:07:22.897530 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:22Z","lastTransitionTime":"2026-02-19T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.000390 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.000435 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.000444 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.000460 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.000475 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.103332 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.103379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.103390 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.103403 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.103413 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.206090 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.206153 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.206176 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.206277 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.206302 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.308923 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.308959 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.308970 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.308985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.308995 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.411190 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.411251 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.411262 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.411279 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.411290 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.514965 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.515006 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.515015 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.515031 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.515042 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.618110 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.618177 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.618189 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.618208 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.618239 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.721673 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.721737 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.721771 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.721792 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.721804 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.724364 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.724398 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:23 crc kubenswrapper[4889]: E0219 00:07:23.724501 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:23 crc kubenswrapper[4889]: E0219 00:07:23.724699 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.724671 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:23 crc kubenswrapper[4889]: E0219 00:07:23.724860 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.771821 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 20:19:33.864430101 +0000 UTC Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.824989 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.825043 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.825053 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.825069 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.825078 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.927432 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.927476 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.927484 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.927500 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:23 crc kubenswrapper[4889]: I0219 00:07:23.927509 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:23Z","lastTransitionTime":"2026-02-19T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.030835 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.030893 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.030903 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.030929 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.030943 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.133400 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.133439 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.133448 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.133461 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.133469 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.236434 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.236486 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.236498 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.236514 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.236527 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.339473 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.339537 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.339551 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.339576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.339590 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.442483 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.442523 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.442532 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.442550 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.442560 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.545801 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.545876 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.545901 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.545933 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.545957 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.649196 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.649312 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.649340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.649374 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.649400 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.725110 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:24 crc kubenswrapper[4889]: E0219 00:07:24.726500 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.742447 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.749317 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:05Z\\\",\\\"message\\\":\\\"informers/factory.go:160\\\\nI0219 00:07:05.823460 6561 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 00:07:05.823500 6561 reflector.go:311] Stopping reflector *v1.Node (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:05.824052 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:07:05.824086 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:07:05.824094 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:07:05.824115 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:05.824123 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:07:05.824150 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:05.824178 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:05.824190 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:07:05.824239 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:05.824252 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:05.824242 6561 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.753397 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.753456 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.753470 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.753490 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.753503 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.766384 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.772633 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:44:57.754002527 +0000 UTC Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.782549 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.799911 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.814724 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.834953 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.851614 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.859168 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.859203 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.859229 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.859280 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.859299 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.867840 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08
c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.880669 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41992760-f399-4bd2-9fdc-9052243d56c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b33f17820d114dd80587f02bb83a5640ac50d15f83fdb2370c3d2360e884bac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064b6b6e4b736982c7e5d1740076d6270a4678f07621198dd39a5a09091e0cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91dd67ae61edafbb15c77e284693bb96500f9b0af7d2418d8f88d6d4b094d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.898720 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.917068 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f
0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.930474 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.945173 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc 
kubenswrapper[4889]: I0219 00:07:24.962080 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.962129 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.962139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.962157 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.962169 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:24Z","lastTransitionTime":"2026-02-19T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.962145 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.980013 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:24 crc kubenswrapper[4889]: I0219 00:07:24.999586 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.012156 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.064631 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.064689 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.064701 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.064722 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.064737 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.165123 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/0.log" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.165255 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmhk6" event={"ID":"7dcfc583-b6f2-415a-a4f0-adb70f4865c8","Type":"ContainerDied","Data":"af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.165752 4889 scope.go:117] "RemoveContainer" containerID="af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.165183 4889 generic.go:334] "Generic (PLEG): container finished" podID="7dcfc583-b6f2-415a-a4f0-adb70f4865c8" containerID="af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec" exitCode=1 Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.167038 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.167064 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.167074 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.167089 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.167100 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.187358 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.204970 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41992760-f399-4bd2-9fdc-9052243d56c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b33f17820d114dd80587f02bb83a5640ac50d15f83fdb2370c3d2360e884bac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064b6b6e4b736982c7e5d1740076d6270a4678f07621198dd39a5a09091e0cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91dd67ae61edafbb15c77e284693bb96500f9b0af7d2418d8f88d6d4b094d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.223381 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.238881 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc 
kubenswrapper[4889]: I0219 00:07:25.252895 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4268dad-36e3-4942-b954-abde3c1450db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f12f59e4f0370d490cf9d4e99785d38f8e115a5c043009088eaa851bc70f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.267049 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac35
3286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.271069 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.271124 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.271136 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.271158 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.271171 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.284937 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.303858 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.320072 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.340764 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.357344 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.374069 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.374118 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.374130 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.374151 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.374160 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.380668 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.394863 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.410853 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.426420 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:24Z\\\",\\\"message\\\":\\\"2026-02-19T00:06:39+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78\\\\n2026-02-19T00:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78 to /host/opt/cni/bin/\\\\n2026-02-19T00:06:39Z [verbose] multus-daemon started\\\\n2026-02-19T00:06:39Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.453839 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:05Z\\\",\\\"message\\\":\\\"informers/factory.go:160\\\\nI0219 00:07:05.823460 6561 reflector.go:311] Stopping 
reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 00:07:05.823500 6561 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:05.824052 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:07:05.824086 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:07:05.824094 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:07:05.824115 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:05.824123 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:07:05.824150 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:05.824178 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:05.824190 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:07:05.824239 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:05.824252 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:05.824242 6561 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.465254 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.475390 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T0
0:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.477141 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.477178 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.477192 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.477212 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.477240 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.579953 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.580200 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.580288 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.580353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.580445 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.682300 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.682335 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.682346 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.682362 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.682372 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.724836 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:25 crc kubenswrapper[4889]: E0219 00:07:25.725102 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.724876 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:25 crc kubenswrapper[4889]: E0219 00:07:25.725365 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.724883 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:25 crc kubenswrapper[4889]: E0219 00:07:25.725533 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.773354 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 08:55:59.178766805 +0000 UTC Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.785342 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.785457 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.785526 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.785601 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.785669 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.887710 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.887755 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.887766 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.887785 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.887799 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.990550 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.990619 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.990639 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.990667 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:25 crc kubenswrapper[4889]: I0219 00:07:25.990686 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:25Z","lastTransitionTime":"2026-02-19T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.093469 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.093521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.093536 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.093552 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.093562 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:26Z","lastTransitionTime":"2026-02-19T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.171534 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/0.log" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.171624 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmhk6" event={"ID":"7dcfc583-b6f2-415a-a4f0-adb70f4865c8","Type":"ContainerStarted","Data":"8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3"} Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.186729 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc 
kubenswrapper[4889]: I0219 00:07:26.196608 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.196666 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.196678 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.196701 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.196715 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:26Z","lastTransitionTime":"2026-02-19T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.199649 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4268dad-36e3-4942-b954-abde3c1450db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f12f59e4f0370d490cf9d4e99785d38f8e115a5c043009088eaa851bc70f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.215406 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac35
3286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.232578 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.245403 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.261420 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.277163 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.290622 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.299838 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.299899 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.299912 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.299934 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.299949 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:26Z","lastTransitionTime":"2026-02-19T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.308211 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.324930 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.340303 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.357127 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:24Z\\\",\\\"message\\\":\\\"2026-02-19T00:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78\\\\n2026-02-19T00:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78 to /host/opt/cni/bin/\\\\n2026-02-19T00:06:39Z [verbose] multus-daemon started\\\\n2026-02-19T00:06:39Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T00:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.381030 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:05Z\\\",\\\"message\\\":\\\"informers/factory.go:160\\\\nI0219 00:07:05.823460 6561 reflector.go:311] Stopping 
reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 00:07:05.823500 6561 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:05.824052 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:07:05.824086 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:07:05.824094 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:07:05.824115 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:05.824123 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:07:05.824150 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:05.824178 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:05.824190 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:07:05.824239 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:05.824252 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:05.824242 6561 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.396029 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.402626 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.402670 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.402684 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.402708 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.402723 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:26Z","lastTransitionTime":"2026-02-19T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.412355 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.426933 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.441049 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41992760-f399-4bd2-9fdc-9052243d56c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b33f17820d114dd80587f02bb83a5640ac50d15f83fdb2370c3d2360e884bac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064b6b6e4b736982c7e5d1740076d6270a4678f07621198dd39a5a09091e0cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91dd67ae61edafbb15c77e284693bb96500f9b0af7d2418d8f88d6d4b094d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.456542 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.505333 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.505393 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.505404 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.505427 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.505441 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:26Z","lastTransitionTime":"2026-02-19T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.608980 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.609028 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.609039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.609055 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.609066 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:26Z","lastTransitionTime":"2026-02-19T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.712318 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.712380 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.712398 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.712420 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.712435 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:26Z","lastTransitionTime":"2026-02-19T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.724656 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:26 crc kubenswrapper[4889]: E0219 00:07:26.724941 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.774389 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:14:25.747580683 +0000 UTC
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.814740 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.814805 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.814816 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.814841 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.814855 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:26Z","lastTransitionTime":"2026-02-19T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.917630 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.917674 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.917684 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.917704 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:26 crc kubenswrapper[4889]: I0219 00:07:26.917721 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:26Z","lastTransitionTime":"2026-02-19T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.020467 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.020531 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.020549 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.020576 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.020593 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.123697 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.123764 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.123775 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.123797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.123808 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.226615 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.226727 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.226741 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.226766 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.226778 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.329491 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.329572 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.329583 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.329607 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.329620 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.432900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.433076 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.433099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.433132 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.433153 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.536443 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.536521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.536545 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.536586 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.536612 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.644836 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.644915 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.644929 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.644950 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.644967 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.724812 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.724848 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.724949 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:27 crc kubenswrapper[4889]: E0219 00:07:27.725005 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:27 crc kubenswrapper[4889]: E0219 00:07:27.725116 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:07:27 crc kubenswrapper[4889]: E0219 00:07:27.725296 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.748438 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.748664 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.748677 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.748701 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.748717 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.775116 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:03:35.000843223 +0000 UTC
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.852948 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.852992 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.853006 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.853029 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.853050 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.956146 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.956250 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.956287 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.956324 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:27 crc kubenswrapper[4889]: I0219 00:07:27.956349 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:27Z","lastTransitionTime":"2026-02-19T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.060396 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.060473 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.060494 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.060523 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.060543 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.163932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.163971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.163980 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.164001 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.164012 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.267717 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.267770 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.267785 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.267804 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.268095 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.371043 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.371097 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.371112 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.371137 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.371148 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.473983 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.474027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.474042 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.474061 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.474071 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.577444 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.577517 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.577529 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.577555 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.577569 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.680692 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.680736 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.680747 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.680767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.680780 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.724543 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:28 crc kubenswrapper[4889]: E0219 00:07:28.724786 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.775800 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 23:33:29.611017142 +0000 UTC Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.783625 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.783689 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.783705 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.783732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.783747 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.886944 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.886997 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.887014 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.887040 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.887055 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.990269 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.990344 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.990360 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.990381 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:28 crc kubenswrapper[4889]: I0219 00:07:28.990392 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:28Z","lastTransitionTime":"2026-02-19T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.093145 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.093181 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.093191 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.093210 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.093278 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:29Z","lastTransitionTime":"2026-02-19T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.195909 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.195974 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.195996 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.196027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.196048 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:29Z","lastTransitionTime":"2026-02-19T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.298305 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.298340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.298350 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.298365 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.298377 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:29Z","lastTransitionTime":"2026-02-19T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.400985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.401027 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.401036 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.401054 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.401063 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:29Z","lastTransitionTime":"2026-02-19T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.503743 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.503791 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.503806 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.503822 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.503834 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:29Z","lastTransitionTime":"2026-02-19T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.606849 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.606913 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.606928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.606959 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.606978 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:29Z","lastTransitionTime":"2026-02-19T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.710887 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.710951 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.710963 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.710985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.710998 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:29Z","lastTransitionTime":"2026-02-19T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.724191 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.724261 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.724319 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:29 crc kubenswrapper[4889]: E0219 00:07:29.724414 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:29 crc kubenswrapper[4889]: E0219 00:07:29.724550 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:29 crc kubenswrapper[4889]: E0219 00:07:29.724699 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.775991 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:42:12.049743724 +0000 UTC Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.813562 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.813624 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.813637 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.813660 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.813675 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:29Z","lastTransitionTime":"2026-02-19T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.916766 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.916820 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.916833 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.916855 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:29 crc kubenswrapper[4889]: I0219 00:07:29.916868 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:29Z","lastTransitionTime":"2026-02-19T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.019685 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.019732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.019745 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.019762 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.019774 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.122673 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.122736 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.122750 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.122776 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.122792 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.226863 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.226926 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.226941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.226967 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.226983 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.329713 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.329778 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.329790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.329812 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.329849 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.432526 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.432598 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.432611 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.432700 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.432714 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.535362 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.535442 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.535473 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.535491 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.535504 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.638767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.638830 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.638847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.638872 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.638889 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.725064 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:30 crc kubenswrapper[4889]: E0219 00:07:30.725859 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.726434 4889 scope.go:117] "RemoveContainer" containerID="491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.742973 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.743548 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.743788 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.744052 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.744261 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.776754 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 05:38:16.029758384 +0000 UTC Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.847029 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.847089 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.847107 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.847132 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.847150 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.950154 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.950196 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.950268 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.950298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:30 crc kubenswrapper[4889]: I0219 00:07:30.950320 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:30Z","lastTransitionTime":"2026-02-19T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.053490 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.053539 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.053561 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.053588 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.053606 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.156316 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.156373 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.156390 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.156413 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.156435 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.193171 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/2.log" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.195747 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.196258 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.213036 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.228819 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-con
troller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.245409 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41992760-f399-4bd2-9fdc-9052243d56c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b33f17820d114dd80587f02bb83a5640ac50d15f83fdb2370c3d2360e884bac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064b6b6e4b736982c7e5d1740076d6270a4678f07621198dd39a5a09091e0cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91dd67ae61edafbb15c77e284693bb96500f9b0af7d2418d8f88d6d4b094d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.258586 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.258628 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.258640 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.258655 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.258669 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.261504 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.272408 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.285050 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.301514 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.316696 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438
c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.330766 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc 
kubenswrapper[4889]: I0219 00:07:31.344569 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4268dad-36e3-4942-b954-abde3c1450db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f12f59e4f0370d490cf9d4e99785d38f8e115a5c043009088eaa851bc70f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.361258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.361340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.361365 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 
00:07:31.361393 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.361416 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.362750 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] 
Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.385460 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.399245 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.417554 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:24Z\\\",\\\"message\\\":\\\"2026-02-19T00:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78\\\\n2026-02-19T00:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78 to /host/opt/cni/bin/\\\\n2026-02-19T00:06:39Z [verbose] multus-daemon started\\\\n2026-02-19T00:06:39Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T00:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.441442 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:05Z\\\",\\\"message\\\":\\\"informers/factory.go:160\\\\nI0219 00:07:05.823460 6561 reflector.go:311] Stopping 
reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 00:07:05.823500 6561 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:05.824052 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:07:05.824086 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:07:05.824094 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:07:05.824115 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:05.824123 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:07:05.824150 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:05.824178 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:05.824190 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:07:05.824239 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:05.824252 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:05.824242 6561 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.452460 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.464684 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.464750 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.464764 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.464792 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.464806 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.466062 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.481477 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.567963 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.568012 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.568025 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.568043 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.568054 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.670861 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.670907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.670918 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.670935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.670947 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.724331 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.724331 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:31 crc kubenswrapper[4889]: E0219 00:07:31.724973 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.724388 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:31 crc kubenswrapper[4889]: E0219 00:07:31.725189 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:31 crc kubenswrapper[4889]: E0219 00:07:31.725014 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.774348 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.774393 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.774412 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.774436 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.774455 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.777561 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 16:17:56.204487908 +0000 UTC Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.878664 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.878729 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.878745 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.878767 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.878779 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.982355 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.982414 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.982430 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.982454 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:31 crc kubenswrapper[4889]: I0219 00:07:31.982472 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:31Z","lastTransitionTime":"2026-02-19T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.085855 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.085900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.085910 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.085927 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.085940 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.188562 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.188632 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.188655 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.188681 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.188701 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.202193 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/3.log" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.203043 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/2.log" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.206320 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" exitCode=1 Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.206374 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.206421 4889 scope.go:117] "RemoveContainer" containerID="491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.207500 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:07:32 crc kubenswrapper[4889]: E0219 00:07:32.207798 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.227484 4889 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.245251 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.261050 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41992760-f399-4bd2-9fdc-9052243d56c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b33f17820d114dd80587f02bb83a5640ac50d15f83fdb2370c3d2360e884bac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064b6b6e4b736982c7e5d1740076d6270a4678f07621198dd39a5a09091e0cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91dd67ae61edafbb15c77e284693bb96500f9b0af7d2418d8f88d6d4b094d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.273136 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.288498 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f
0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.292832 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.292879 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.292914 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.292937 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.292953 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.299759 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.309850 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc 
kubenswrapper[4889]: I0219 00:07:32.319692 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4268dad-36e3-4942-b954-abde3c1450db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f12f59e4f0370d490cf9d4e99785d38f8e115a5c043009088eaa851bc70f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.341912 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac35
3286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.360940 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.374135 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.386878 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.394928 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.394967 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.394978 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.394994 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.395007 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.408499 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://491dad4ee13d47283cf1bfc10dc15c300f0eb49e1a931f81a369e8545b9797e4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:05Z\\\",\\\"message\\\":\\\"informers/factory.go:160\\\\nI0219 00:07:05.823460 6561 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 00:07:05.823500 6561 reflector.go:311] Stopping reflector *v1.Node (0s) 
from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:05.824052 6561 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 00:07:05.824086 6561 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0219 00:07:05.824094 6561 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 00:07:05.824115 6561 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:05.824123 6561 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 00:07:05.824150 6561 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:05.824178 6561 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:05.824190 6561 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 00:07:05.824239 6561 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:05.824252 6561 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:05.824242 6561 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:31Z\\\",\\\"message\\\":\\\"1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.708981 6960 factory.go:656] Stopping watch factory\\\\nI0219 00:07:31.709012 6960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:31.709026 6960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:07:31.709036 6960 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:07:31.709047 6960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:31.709058 6960 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI0219 00:07:31.709078 6960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:31.709119 6960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:31.709140 6960 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.709443 6960 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.709570 6960 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.709749 6960 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.710249 6960 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/
lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.422130 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.439951 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.457921 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.472318 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.486002 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:24Z\\\",\\\"message\\\":\\\"2026-02-19T00:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78\\\\n2026-02-19T00:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78 to /host/opt/cni/bin/\\\\n2026-02-19T00:06:39Z [verbose] multus-daemon started\\\\n2026-02-19T00:06:39Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T00:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.497708 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.497752 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.497770 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.497793 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.497809 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.600515 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.600587 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.600609 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.600636 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.600658 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.703178 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.703231 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.703240 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.703254 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.703265 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.724369 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:32 crc kubenswrapper[4889]: E0219 00:07:32.724579 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.778774 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 03:59:01.01973357 +0000 UTC Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.806403 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.806463 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.806479 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.806504 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.806522 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.815730 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.815795 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.815806 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.815827 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.815839 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: E0219 00:07:32.836780 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.842018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.842070 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.842090 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.842115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.842134 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: E0219 00:07:32.860966 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.866067 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.866135 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.866159 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.866189 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.866214 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: E0219 00:07:32.887769 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.893078 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.893258 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.893383 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.893477 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.893555 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: E0219 00:07:32.912140 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.917126 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.917317 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.917445 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.917555 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.917644 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:32 crc kubenswrapper[4889]: E0219 00:07:32.931833 4889 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"61c1eee2-e0b6-477c-92d4-6097001c2ead\\\",\\\"systemUUID\\\":\\\"afc3a4b4-af68-4e64-95b9-aee7b824bf50\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:32 crc kubenswrapper[4889]: E0219 00:07:32.932398 4889 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.934765 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.934826 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.934847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.934878 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:32 crc kubenswrapper[4889]: I0219 00:07:32.934898 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:32Z","lastTransitionTime":"2026-02-19T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.037860 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.037931 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.037988 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.038024 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.038049 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.141409 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.141455 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.141469 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.141490 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.141501 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.212675 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/3.log" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.219457 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:07:33 crc kubenswrapper[4889]: E0219 00:07:33.219623 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.243785 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.246445 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.246894 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.247109 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.247341 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.247537 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.264565 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41992760-f399-4bd2-9fdc-9052243d56c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b33f17820d114dd80587f02bb83a5640ac50d15f83fdb2370c3d2360e884bac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://064b6b6e4b736982c7e5d1740076d6270a4678f07621198dd39a5a09091e0cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91dd67ae61edafbb15c77e284693bb96500f9b0af7d2418d8f88d6d4b094d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.284587 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.303624 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c
4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.331652 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f
0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.351854 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.354206 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.354298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.354315 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.354340 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.354360 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.369264 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc 
kubenswrapper[4889]: I0219 00:07:33.383326 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4268dad-36e3-4942-b954-abde3c1450db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f12f59e4f0370d490cf9d4e99785d38f8e115a5c043009088eaa851bc70f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.402534 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac35
3286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.423635 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.437928 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.457397 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.457470 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.457490 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.457521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.457542 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.461679 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:24Z\\\",\\\"message\\\":\\\"2026-02-19T00:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78\\\\n2026-02-19T00:06:39+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78 to /host/opt/cni/bin/\\\\n2026-02-19T00:06:39Z [verbose] multus-daemon started\\\\n2026-02-19T00:06:39Z [verbose] Readiness Indicator file check\\\\n2026-02-19T00:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.513542 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:31Z\\\",\\\"message\\\":\\\"1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.708981 6960 
factory.go:656] Stopping watch factory\\\\nI0219 00:07:31.709012 6960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:31.709026 6960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:07:31.709036 6960 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:07:31.709047 6960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:31.709058 6960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:31.709078 6960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:31.709119 6960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:31.709140 6960 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.709443 6960 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.709570 6960 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.709749 6960 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.710249 6960 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.542772 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.558454 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.560115 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.560173 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.560186 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.560229 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.560244 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.575738 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.594714 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.612500 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.663492 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.663546 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.663556 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.663580 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.663592 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.724459 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.724504 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.724481 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:33 crc kubenswrapper[4889]: E0219 00:07:33.724641 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:33 crc kubenswrapper[4889]: E0219 00:07:33.724785 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:33 crc kubenswrapper[4889]: E0219 00:07:33.724894 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.765877 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.765968 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.765996 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.766031 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.766053 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.779082 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:49:50.238630702 +0000 UTC Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.868751 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.868819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.868843 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.868869 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.868887 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.971486 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.971540 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.971555 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.971572 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:33 crc kubenswrapper[4889]: I0219 00:07:33.971583 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:33Z","lastTransitionTime":"2026-02-19T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.075171 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.075278 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.075297 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.075351 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.075368 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:34Z","lastTransitionTime":"2026-02-19T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.177887 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.177951 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.177971 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.177993 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.178006 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:34Z","lastTransitionTime":"2026-02-19T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.286484 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.286903 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.287041 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.287457 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.287576 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:34Z","lastTransitionTime":"2026-02-19T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.391234 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.391288 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.391299 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.391319 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.391334 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:34Z","lastTransitionTime":"2026-02-19T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.494184 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.494268 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.494283 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.494306 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.494321 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:34Z","lastTransitionTime":"2026-02-19T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.597344 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.597379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.597388 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.597406 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.597418 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:34Z","lastTransitionTime":"2026-02-19T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.700863 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.700908 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.700917 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.700935 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.700944 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:34Z","lastTransitionTime":"2026-02-19T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.725043 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:34 crc kubenswrapper[4889]: E0219 00:07:34.725202 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.742695 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c2d69f3-4487-470a-b1e9-489e05f244eb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5900a3a87cf158594f53dbaa39fd540b595efcd510427439ab0b3cc1249e9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017d8c3a43fb64579576e2b850ed4aa3ce87b5d113910cc61ce0cc96afde401b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2zr8s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lvs75\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.758044 4889 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-sw97l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"66e9b544-b66c-43d6-8d8d-d6231a70a6be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fcrh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sw97l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc 
kubenswrapper[4889]: I0219 00:07:34.771682 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4268dad-36e3-4942-b954-abde3c1450db\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f12f59e4f0370d490cf9d4e99785d38f8e115a5c043009088eaa851bc70f8b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d8284937b8eac71e3ecb6d9b9a7516a06310bce99393c0f88cd176538eaab61\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.779373 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 06:19:33.977983838 +0000 UTC Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.793127 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bbefdc30-a268-4186-bf13-ea846011bd2c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:06:34Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 00:06:28.336277 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 00:06:28.338763 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2408634568/tls.crt::/tmp/serving-cert-2408634568/tls.key\\\\\\\"\\\\nI0219 00:06:34.475327 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:06:34.478845 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:06:34.478869 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:06:34.478893 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:06:34.478899 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:06:34.487503 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0219 00:06:34.487533 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0219 00:06:34.487553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:06:34.487577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:06:34.487584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:06:34.487593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:06:34.487600 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0219 00:06:34.488822 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04fe1a40a4776298b903ab2a20e2ac35
3286a9e6fe0ed0c54daadb880fef2b34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.803842 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.803898 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.803907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.803924 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.803937 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:34Z","lastTransitionTime":"2026-02-19T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.809716 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.823519 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-x8h8n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13343f8d-046b-4e45-8424-a240f34a9667\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795e6f3ea2c7fd373a48a45f8cc7d5783402d366d47142b3c872a3899d934677\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8tj5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-x8h8n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.839207 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"900d194e-937f-4a59-abba-21ed9f94f24f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9eef1c688255edcd55f5f724ca618e1aed7fec3acf314d88726b9855ad804c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s27x5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-pcmlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.860211 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f91d278-7461-4166-8613-4b78aa4e93be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://59a3cc1f480239d20d5a640d40dccbef8b79db4daf78529fb1972213dabaf88f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99778dfb39ab4de6a82a5d1fb8f35c08b4dd234d323d2a29d0f198207472672f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9378429d424f4c685e07001cf1c9f6c4a9b5c85982296982232398b56579f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e820e9576a8b85723cf8f7ed12e58ccd3cf3e37752d241b3533565726894b9cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9153f0bb6e8cd8a59a1f471eef7548193ed0f99cee780ceccf08421849604364\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5270a586da9acd55bd05c22a1357429c277ed01ab5fceab0972c4f319df81270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://436159340b6e48c0d5374e6c6d1b7f449336df2980206067b1f790b6c220fa40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mmd6z\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-2vx4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.875520 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-mpwsr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb6fe6a2-a58a-4630-9df9-f7840e5088ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99ea3adfef53cef8c792ec90ff39817fda676b5af91da4026eaeec0c8d483984\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235
da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2ldxf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:39Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-mpwsr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.893335 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6d86ab2674442c68af7dd2ab22248335e6655ac773f3a706049222a4f14e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.908537 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.908640 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.908746 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.908766 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.908792 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.908809 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:34Z","lastTransitionTime":"2026-02-19T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.931898 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:33Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.953255 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-qmhk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dcfc583-b6f2-415a-a4f0-adb70f4865c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:24Z\\\",\\\"message\\\":\\\"2026-02-19T00:06:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78\\\\n2026-02-19T00:06:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2a345155-1c11-40d8-9a03-219e0366bf78 to /host/opt/cni/bin/\\\\n2026-02-19T00:06:39Z [verbose] multus-daemon started\\\\n2026-02-19T00:06:39Z [verbose] 
Readiness Indicator file check\\\\n2026-02-19T00:07:24Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-srlck\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-qmhk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.980973 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"707d1219-7187-4fda-b155-e6d64687b190\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T00:07:31Z\\\",\\\"message\\\":\\\"1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.708981 6960 
factory.go:656] Stopping watch factory\\\\nI0219 00:07:31.709012 6960 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 00:07:31.709026 6960 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 00:07:31.709036 6960 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 00:07:31.709047 6960 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 00:07:31.709058 6960 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 00:07:31.709078 6960 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 00:07:31.709119 6960 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 00:07:31.709140 6960 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.709443 6960 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.709570 6960 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.709749 6960 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 00:07:31.710249 6960 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:07:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a145a8b5eafa408db5
1a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lssf6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-4nwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:34 crc kubenswrapper[4889]: I0219 00:07:34.997079 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4263927e971c52fa7d1d20579e72c99b1ba97f30dec755e34fc13e9d646b1a15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.012030 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.012135 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.012155 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.012196 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.012211 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.015504 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9616c5a9-2a57-4d14-8b06-d8c87443b5e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84b73082ddf723f4063dcdcaddcc87657d0392665cfc9336d3ccca5173d40717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://53a3a970a08
c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0d46ae13fca249948d20092d80298c308deb9e51246b7d9333ac4ff7de076\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4168d2b9826c3152209d7148927a0cfc1053ff52881da186e0db5f414df0f003\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.029873 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41992760-f399-4bd2-9fdc-9052243d56c2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b33f17820d114dd80587f02bb83a5640ac50d15f83fdb2370c3d2360e884bac0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064b6b6e4b736982c7e5d1740076d6270a4678f07621198dd39a5a09091e0cc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f91dd67ae61edafbb15c77e284693bb96500f9b0af7d2418d8f88d6d4b094d77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7f115ad4af24ee3358bb583490779be9015ccc8412e2da2d7cd2b59c8edf6359\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:06:15Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:06:14Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.045283 4889 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:06:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c095bd8ac90b29807244b191900326c846219101c13a08b7d8004e72a0816a16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e85771d6e2f3704ff54095cb103f4d4fad13ed7647af57bd560f04f389cf4616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.116261 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.116311 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.116384 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.116403 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.116418 4889 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.218961 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.219339 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.219521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.225831 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.225846 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.329073 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.329127 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.329139 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.329191 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.329208 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.435056 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.435088 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.435096 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.435109 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.435117 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.538148 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.538184 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.538195 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.538212 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.538245 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.641039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.641085 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.641116 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.641138 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.641192 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.724485 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.724499 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.724680 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:35 crc kubenswrapper[4889]: E0219 00:07:35.724749 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:35 crc kubenswrapper[4889]: E0219 00:07:35.724900 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:35 crc kubenswrapper[4889]: E0219 00:07:35.725076 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.743815 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.743872 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.743885 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.743900 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.743911 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.780450 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:16:50.490548119 +0000 UTC Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.846865 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.846932 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.846959 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.847191 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.847259 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.950688 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.950787 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.951039 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.951068 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:35 crc kubenswrapper[4889]: I0219 00:07:35.951114 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:35Z","lastTransitionTime":"2026-02-19T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.054392 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.054476 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.054496 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.054524 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.054545 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.158954 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.159031 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.159049 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.159073 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.159091 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.262032 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.262086 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.262105 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.262513 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.262810 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.366696 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.366759 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.366772 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.366795 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.366807 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.470329 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.470388 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.470401 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.470421 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.470440 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.574277 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.574342 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.574364 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.574395 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.574416 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.677263 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.677334 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.677359 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.677389 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.677411 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.724131 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:36 crc kubenswrapper[4889]: E0219 00:07:36.724304 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.780633 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:25:40.163029212 +0000 UTC Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.780676 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.780722 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.780740 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.780764 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.780780 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.883860 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.883940 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.883964 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.883994 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.884017 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.987951 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.988006 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.988022 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.988050 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:36 crc kubenswrapper[4889]: I0219 00:07:36.988067 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:36Z","lastTransitionTime":"2026-02-19T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.091296 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.091379 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.091394 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.091417 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.091432 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:37Z","lastTransitionTime":"2026-02-19T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.194324 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.194393 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.194412 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.194438 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.194456 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:37Z","lastTransitionTime":"2026-02-19T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.296403 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.296452 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.296463 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.296477 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.296488 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:37Z","lastTransitionTime":"2026-02-19T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.398859 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.398955 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.398981 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.399018 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.399036 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:37Z","lastTransitionTime":"2026-02-19T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.501711 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.501759 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.501771 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.501790 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.501805 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:37Z","lastTransitionTime":"2026-02-19T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.605638 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.605701 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.605713 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.605732 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.605743 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:37Z","lastTransitionTime":"2026-02-19T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.708156 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.708202 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.708211 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.708244 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.708253 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:37Z","lastTransitionTime":"2026-02-19T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.724925 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:37 crc kubenswrapper[4889]: E0219 00:07:37.725108 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.725437 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:37 crc kubenswrapper[4889]: E0219 00:07:37.725531 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.725719 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:37 crc kubenswrapper[4889]: E0219 00:07:37.725808 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.781053 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 23:55:33.657890882 +0000 UTC Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.813955 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.814327 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.814487 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.814628 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.814770 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:37Z","lastTransitionTime":"2026-02-19T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.918068 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.918496 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.918643 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.918851 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:37 crc kubenswrapper[4889]: I0219 00:07:37.919076 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:37Z","lastTransitionTime":"2026-02-19T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.022860 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.022947 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.022977 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.023005 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.023022 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.125907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.125967 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.125985 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.126012 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.126029 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.228921 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.228977 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.228996 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.229020 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.229038 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.332029 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.332088 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.332104 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.332127 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.332144 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.336545 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.336662 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.336702 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.336742 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.336793 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.336968 4889 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337040 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.337015548 +0000 UTC m=+148.301680579 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337120 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.33710769 +0000 UTC m=+148.301772721 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337212 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337298 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337328 4889 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337406 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.337379548 +0000 UTC m=+148.302044569 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337517 4889 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337591 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.337571834 +0000 UTC m=+148.302236855 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337613 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337658 4889 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337673 4889 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.337728 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.337713169 +0000 UTC m=+148.302378160 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.435456 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.435914 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.436095 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.436287 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.436431 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.539295 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.539344 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.539358 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.539377 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.539390 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.642164 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.642312 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.642386 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.642419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.642444 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.725138 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:38 crc kubenswrapper[4889]: E0219 00:07:38.725373 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.744676 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.744738 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.744760 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.744785 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.744809 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.781944 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:55:13.679752663 +0000 UTC Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.848079 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.848129 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.848141 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.848162 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.848175 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.951196 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.951298 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.951317 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.951345 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:38 crc kubenswrapper[4889]: I0219 00:07:38.951363 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:38Z","lastTransitionTime":"2026-02-19T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.054046 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.054133 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.054148 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.054174 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.054191 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.157021 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.157069 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.157080 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.157099 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.157114 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.259439 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.259472 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.259481 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.259494 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.259503 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.362460 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.362533 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.362549 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.362577 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.362595 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.465954 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.466369 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.466516 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.466652 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.466811 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.571884 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.571944 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.571960 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.571984 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.572002 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.674380 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.674449 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.674463 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.674490 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.674505 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.724989 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.725033 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.725006 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:39 crc kubenswrapper[4889]: E0219 00:07:39.725194 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:39 crc kubenswrapper[4889]: E0219 00:07:39.725389 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:39 crc kubenswrapper[4889]: E0219 00:07:39.725602 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.777941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.777999 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.778023 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.778048 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.778063 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.782137 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:38:19.770145826 +0000 UTC Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.881667 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.881703 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.881712 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.881726 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.881737 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.983845 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.983901 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.983927 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.983955 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:39 crc kubenswrapper[4889]: I0219 00:07:39.983978 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:39Z","lastTransitionTime":"2026-02-19T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.087317 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.087360 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.087371 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.087388 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.087400 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:40Z","lastTransitionTime":"2026-02-19T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.190754 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.190802 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.190819 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.190841 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.190859 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:40Z","lastTransitionTime":"2026-02-19T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.293846 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.293905 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.293924 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.293949 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.293970 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:40Z","lastTransitionTime":"2026-02-19T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.398071 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.398556 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.398571 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.398590 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.398602 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:40Z","lastTransitionTime":"2026-02-19T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.502054 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.502129 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.502151 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.502182 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.502201 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:40Z","lastTransitionTime":"2026-02-19T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.604938 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.604996 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.605014 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.605042 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.605065 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:40Z","lastTransitionTime":"2026-02-19T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.709071 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.709124 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.709144 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.709161 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.709175 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:40Z","lastTransitionTime":"2026-02-19T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.724664 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:40 crc kubenswrapper[4889]: E0219 00:07:40.724815 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.783044 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:19:08.965860115 +0000 UTC Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.812419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.812464 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.812477 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.812497 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.812507 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:40Z","lastTransitionTime":"2026-02-19T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.915185 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.915255 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.915265 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.915282 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:40 crc kubenswrapper[4889]: I0219 00:07:40.915292 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:40Z","lastTransitionTime":"2026-02-19T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.018359 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.018389 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.018398 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.018410 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.018419 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.121741 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.121788 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.121797 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.121857 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.121868 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.224803 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.224864 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.224882 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.224906 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.224930 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.327771 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.327841 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.327863 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.327893 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.327915 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.430353 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.430419 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.430458 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.430491 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.430513 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.534091 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.534177 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.534203 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.534272 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.534300 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.637446 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.637532 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.637550 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.637577 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.637588 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.724598 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.724735 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:07:41 crc kubenswrapper[4889]: E0219 00:07:41.724793 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.724752 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:07:41 crc kubenswrapper[4889]: E0219 00:07:41.724936 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:07:41 crc kubenswrapper[4889]: E0219 00:07:41.725085 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.740713 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.740772 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.740784 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.740802 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.740815 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.783874 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:31:52.344863227 +0000 UTC Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.843655 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.843710 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.843728 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.843757 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.843780 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.946396 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.946534 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.946558 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.946591 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:41 crc kubenswrapper[4889]: I0219 00:07:41.946613 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:41Z","lastTransitionTime":"2026-02-19T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.050380 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.050433 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.050449 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.050475 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.050495 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.153911 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.154047 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.154078 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.154110 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.154140 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.260847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.260941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.260969 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.261004 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.261043 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.364875 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.364924 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.364934 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.364951 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.364963 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.468783 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.468857 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.468879 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.468907 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.468929 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.571801 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.571847 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.571859 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.571880 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.571930 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.675460 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.675513 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.675532 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.675560 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.675577 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.724478 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:07:42 crc kubenswrapper[4889]: E0219 00:07:42.724644 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.778388 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.778451 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.778484 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.778521 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.778545 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.784865 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:38:51.120407342 +0000 UTC Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.882128 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.882173 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.882184 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.882202 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.882213 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.986668 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.986727 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.986746 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.986778 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:42 crc kubenswrapper[4889]: I0219 00:07:42.986803 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:42Z","lastTransitionTime":"2026-02-19T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.090660 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.090719 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.090731 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.090749 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.090762 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:43Z","lastTransitionTime":"2026-02-19T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.193444 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.193774 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.193857 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.193941 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.194021 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:43Z","lastTransitionTime":"2026-02-19T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.235438 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.235820 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.236016 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.236153 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.236300 4889 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:07:43Z","lastTransitionTime":"2026-02-19T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.316011 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"] Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.317743 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.319885 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.321681 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.321955 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.322162 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.362993 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.362965795 podStartE2EDuration="1m6.362965795s" podCreationTimestamp="2026-02-19 00:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.345813825 +0000 UTC m=+89.310478826" watchObservedRunningTime="2026-02-19 00:07:43.362965795 +0000 UTC m=+89.327630826" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.380720 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=33.380683832 podStartE2EDuration="33.380683832s" podCreationTimestamp="2026-02-19 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.363826282 +0000 UTC m=+89.328491343" watchObservedRunningTime="2026-02-19 00:07:43.380683832 
+0000 UTC m=+89.345348843" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.410521 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.410494761 podStartE2EDuration="19.410494761s" podCreationTimestamp="2026-02-19 00:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.410156181 +0000 UTC m=+89.374821172" watchObservedRunningTime="2026-02-19 00:07:43.410494761 +0000 UTC m=+89.375159772" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.451199 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=69.451174837 podStartE2EDuration="1m9.451174837s" podCreationTimestamp="2026-02-19 00:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.430114587 +0000 UTC m=+89.394779608" watchObservedRunningTime="2026-02-19 00:07:43.451174837 +0000 UTC m=+89.415839848" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.465995 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-x8h8n" podStartSLOduration=68.465968673 podStartE2EDuration="1m8.465968673s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.465458978 +0000 UTC m=+89.430123969" watchObservedRunningTime="2026-02-19 00:07:43.465968673 +0000 UTC m=+89.430633664" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.500950 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f2c385fb-5bc6-496e-8bee-43ef829142d9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.501048 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f2c385fb-5bc6-496e-8bee-43ef829142d9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.501088 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f2c385fb-5bc6-496e-8bee-43ef829142d9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.501117 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2c385fb-5bc6-496e-8bee-43ef829142d9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6" Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.501149 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f2c385fb-5bc6-496e-8bee-43ef829142d9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.504773 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-2vx4q" podStartSLOduration=68.50475019 podStartE2EDuration="1m8.50475019s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.503352647 +0000 UTC m=+89.468017658" watchObservedRunningTime="2026-02-19 00:07:43.50475019 +0000 UTC m=+89.469415181"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.505207 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podStartSLOduration=68.505200233 podStartE2EDuration="1m8.505200233s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.484599958 +0000 UTC m=+89.449264969" watchObservedRunningTime="2026-02-19 00:07:43.505200233 +0000 UTC m=+89.469865224"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.531504 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lvs75" podStartSLOduration=67.531475675 podStartE2EDuration="1m7.531475675s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.517747411 +0000 UTC m=+89.482412422" watchObservedRunningTime="2026-02-19 00:07:43.531475675 +0000 UTC m=+89.496140676"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.605030 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f2c385fb-5bc6-496e-8bee-43ef829142d9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.605088 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c385fb-5bc6-496e-8bee-43ef829142d9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.605125 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f2c385fb-5bc6-496e-8bee-43ef829142d9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.605143 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f2c385fb-5bc6-496e-8bee-43ef829142d9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.605160 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2c385fb-5bc6-496e-8bee-43ef829142d9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.605234 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/f2c385fb-5bc6-496e-8bee-43ef829142d9-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.605304 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/f2c385fb-5bc6-496e-8bee-43ef829142d9-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.606140 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f2c385fb-5bc6-496e-8bee-43ef829142d9-service-ca\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.614153 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2c385fb-5bc6-496e-8bee-43ef829142d9-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.623307 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2c385fb-5bc6-496e-8bee-43ef829142d9-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-m2zr6\" (UID: \"f2c385fb-5bc6-496e-8bee-43ef829142d9\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.634761 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-qmhk6" podStartSLOduration=68.634745791 podStartE2EDuration="1m8.634745791s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.633513723 +0000 UTC m=+89.598178714" watchObservedRunningTime="2026-02-19 00:07:43.634745791 +0000 UTC m=+89.599410782"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.641270 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6"
Feb 19 00:07:43 crc kubenswrapper[4889]: W0219 00:07:43.659902 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2c385fb_5bc6_496e_8bee_43ef829142d9.slice/crio-0732d14eec7a9ece8506c3a29c3b4f8473d129e603a40676ca0cf57076ebdcce WatchSource:0}: Error finding container 0732d14eec7a9ece8506c3a29c3b4f8473d129e603a40676ca0cf57076ebdcce: Status 404 returned error can't find the container with id 0732d14eec7a9ece8506c3a29c3b4f8473d129e603a40676ca0cf57076ebdcce
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.671750 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-mpwsr" podStartSLOduration=68.671719562 podStartE2EDuration="1m8.671719562s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:43.670873996 +0000 UTC m=+89.635538987" watchObservedRunningTime="2026-02-19 00:07:43.671719562 +0000 UTC m=+89.636384553"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.724764 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.724811 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.724764 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:43 crc kubenswrapper[4889]: E0219 00:07:43.724894 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:43 crc kubenswrapper[4889]: E0219 00:07:43.724997 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:07:43 crc kubenswrapper[4889]: E0219 00:07:43.725060 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.785659 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:14:54.622391992 +0000 UTC
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.785750 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Feb 19 00:07:43 crc kubenswrapper[4889]: I0219 00:07:43.794330 4889 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 19 00:07:44 crc kubenswrapper[4889]: I0219 00:07:44.262328 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6" event={"ID":"f2c385fb-5bc6-496e-8bee-43ef829142d9","Type":"ContainerStarted","Data":"9af2034e00590ea38fce347dd664148643ec98addab73ebd368628f200038e87"}
Feb 19 00:07:44 crc kubenswrapper[4889]: I0219 00:07:44.262430 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6" event={"ID":"f2c385fb-5bc6-496e-8bee-43ef829142d9","Type":"ContainerStarted","Data":"0732d14eec7a9ece8506c3a29c3b4f8473d129e603a40676ca0cf57076ebdcce"}
Feb 19 00:07:44 crc kubenswrapper[4889]: I0219 00:07:44.281105 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-m2zr6" podStartSLOduration=69.281080965 podStartE2EDuration="1m9.281080965s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:07:44.280070585 +0000 UTC m=+90.244735576" watchObservedRunningTime="2026-02-19 00:07:44.281080965 +0000 UTC m=+90.245745996"
Feb 19 00:07:44 crc kubenswrapper[4889]: I0219 00:07:44.724448 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:44 crc kubenswrapper[4889]: E0219 00:07:44.725646 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:45 crc kubenswrapper[4889]: I0219 00:07:45.724776 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:45 crc kubenswrapper[4889]: I0219 00:07:45.724870 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:45 crc kubenswrapper[4889]: I0219 00:07:45.724796 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:45 crc kubenswrapper[4889]: E0219 00:07:45.724992 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:45 crc kubenswrapper[4889]: E0219 00:07:45.725066 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:07:45 crc kubenswrapper[4889]: E0219 00:07:45.725133 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:46 crc kubenswrapper[4889]: I0219 00:07:46.725054 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:46 crc kubenswrapper[4889]: E0219 00:07:46.725193 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:47 crc kubenswrapper[4889]: I0219 00:07:47.724517 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:47 crc kubenswrapper[4889]: I0219 00:07:47.724630 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:47 crc kubenswrapper[4889]: E0219 00:07:47.724755 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:47 crc kubenswrapper[4889]: I0219 00:07:47.724657 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:47 crc kubenswrapper[4889]: E0219 00:07:47.724843 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:07:47 crc kubenswrapper[4889]: E0219 00:07:47.724992 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:47 crc kubenswrapper[4889]: I0219 00:07:47.726034 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"
Feb 19 00:07:47 crc kubenswrapper[4889]: E0219 00:07:47.726285 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190"
Feb 19 00:07:48 crc kubenswrapper[4889]: I0219 00:07:48.724389 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:48 crc kubenswrapper[4889]: E0219 00:07:48.724597 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:49 crc kubenswrapper[4889]: I0219 00:07:49.724921 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:49 crc kubenswrapper[4889]: I0219 00:07:49.724959 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:49 crc kubenswrapper[4889]: I0219 00:07:49.724921 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:49 crc kubenswrapper[4889]: E0219 00:07:49.725053 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:07:49 crc kubenswrapper[4889]: E0219 00:07:49.725282 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:49 crc kubenswrapper[4889]: E0219 00:07:49.725355 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:50 crc kubenswrapper[4889]: I0219 00:07:50.726180 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:50 crc kubenswrapper[4889]: E0219 00:07:50.726725 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:51 crc kubenswrapper[4889]: I0219 00:07:51.724356 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:51 crc kubenswrapper[4889]: I0219 00:07:51.724429 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:51 crc kubenswrapper[4889]: I0219 00:07:51.724476 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:51 crc kubenswrapper[4889]: E0219 00:07:51.724494 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:51 crc kubenswrapper[4889]: E0219 00:07:51.724636 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:07:51 crc kubenswrapper[4889]: E0219 00:07:51.724668 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:52 crc kubenswrapper[4889]: I0219 00:07:52.724141 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:52 crc kubenswrapper[4889]: E0219 00:07:52.724345 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:53 crc kubenswrapper[4889]: I0219 00:07:53.724267 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:53 crc kubenswrapper[4889]: I0219 00:07:53.724267 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:53 crc kubenswrapper[4889]: E0219 00:07:53.724458 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:07:53 crc kubenswrapper[4889]: I0219 00:07:53.724275 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:53 crc kubenswrapper[4889]: E0219 00:07:53.724517 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:53 crc kubenswrapper[4889]: E0219 00:07:53.724715 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:54 crc kubenswrapper[4889]: I0219 00:07:54.724629 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:54 crc kubenswrapper[4889]: E0219 00:07:54.727458 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:54 crc kubenswrapper[4889]: I0219 00:07:54.933612 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:54 crc kubenswrapper[4889]: E0219 00:07:54.933894 4889 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 00:07:54 crc kubenswrapper[4889]: E0219 00:07:54.934018 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs podName:66e9b544-b66c-43d6-8d8d-d6231a70a6be nodeName:}" failed. No retries permitted until 2026-02-19 00:08:58.933989996 +0000 UTC m=+164.898655027 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs") pod "network-metrics-daemon-sw97l" (UID: "66e9b544-b66c-43d6-8d8d-d6231a70a6be") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 00:07:55 crc kubenswrapper[4889]: I0219 00:07:55.724542 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:55 crc kubenswrapper[4889]: I0219 00:07:55.724634 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:55 crc kubenswrapper[4889]: E0219 00:07:55.724670 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:55 crc kubenswrapper[4889]: E0219 00:07:55.724774 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:07:55 crc kubenswrapper[4889]: I0219 00:07:55.724848 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:55 crc kubenswrapper[4889]: E0219 00:07:55.724899 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:56 crc kubenswrapper[4889]: I0219 00:07:56.725141 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:56 crc kubenswrapper[4889]: E0219 00:07:56.725614 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:56 crc kubenswrapper[4889]: I0219 00:07:56.740955 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Feb 19 00:07:57 crc kubenswrapper[4889]: I0219 00:07:57.724985 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:57 crc kubenswrapper[4889]: I0219 00:07:57.725373 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:57 crc kubenswrapper[4889]: E0219 00:07:57.725548 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:57 crc kubenswrapper[4889]: E0219 00:07:57.725747 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:07:57 crc kubenswrapper[4889]: I0219 00:07:57.725863 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:57 crc kubenswrapper[4889]: E0219 00:07:57.726091 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:58 crc kubenswrapper[4889]: I0219 00:07:58.724627 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:07:58 crc kubenswrapper[4889]: E0219 00:07:58.724821 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:07:59 crc kubenswrapper[4889]: I0219 00:07:59.724846 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:07:59 crc kubenswrapper[4889]: I0219 00:07:59.724958 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:07:59 crc kubenswrapper[4889]: I0219 00:07:59.724896 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:07:59 crc kubenswrapper[4889]: E0219 00:07:59.725065 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 00:07:59 crc kubenswrapper[4889]: E0219 00:07:59.725329 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:07:59 crc kubenswrapper[4889]: E0219 00:07:59.725460 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 00:08:00 crc kubenswrapper[4889]: I0219 00:08:00.724717 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l"
Feb 19 00:08:00 crc kubenswrapper[4889]: E0219 00:08:00.724991 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be"
Feb 19 00:08:01 crc kubenswrapper[4889]: I0219 00:08:01.724363 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 00:08:01 crc kubenswrapper[4889]: I0219 00:08:01.724363 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 00:08:01 crc kubenswrapper[4889]: I0219 00:08:01.724500 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:08:01 crc kubenswrapper[4889]: E0219 00:08:01.724864 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 00:08:01 crc kubenswrapper[4889]: E0219 00:08:01.725069 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:01 crc kubenswrapper[4889]: E0219 00:08:01.725375 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:02 crc kubenswrapper[4889]: I0219 00:08:02.725494 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:02 crc kubenswrapper[4889]: I0219 00:08:02.725607 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:08:02 crc kubenswrapper[4889]: E0219 00:08:02.725790 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:02 crc kubenswrapper[4889]: E0219 00:08:02.725808 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-4nwjd_openshift-ovn-kubernetes(707d1219-7187-4fda-b155-e6d64687b190)\"" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" Feb 19 00:08:03 crc kubenswrapper[4889]: I0219 00:08:03.724059 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:03 crc kubenswrapper[4889]: I0219 00:08:03.724198 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:03 crc kubenswrapper[4889]: I0219 00:08:03.724086 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:03 crc kubenswrapper[4889]: E0219 00:08:03.724391 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:03 crc kubenswrapper[4889]: E0219 00:08:03.724691 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:03 crc kubenswrapper[4889]: E0219 00:08:03.724782 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:04 crc kubenswrapper[4889]: I0219 00:08:04.727540 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:04 crc kubenswrapper[4889]: E0219 00:08:04.728471 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:05 crc kubenswrapper[4889]: I0219 00:08:05.724356 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:05 crc kubenswrapper[4889]: I0219 00:08:05.724431 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:05 crc kubenswrapper[4889]: I0219 00:08:05.724466 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:05 crc kubenswrapper[4889]: E0219 00:08:05.724631 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:05 crc kubenswrapper[4889]: E0219 00:08:05.724763 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:05 crc kubenswrapper[4889]: E0219 00:08:05.724928 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:06 crc kubenswrapper[4889]: I0219 00:08:06.724621 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:06 crc kubenswrapper[4889]: E0219 00:08:06.724827 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:07 crc kubenswrapper[4889]: I0219 00:08:07.724312 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:07 crc kubenswrapper[4889]: I0219 00:08:07.724335 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:07 crc kubenswrapper[4889]: I0219 00:08:07.725036 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:07 crc kubenswrapper[4889]: E0219 00:08:07.725383 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:07 crc kubenswrapper[4889]: E0219 00:08:07.725497 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:07 crc kubenswrapper[4889]: E0219 00:08:07.725559 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:08 crc kubenswrapper[4889]: I0219 00:08:08.724587 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:08 crc kubenswrapper[4889]: E0219 00:08:08.724790 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:09 crc kubenswrapper[4889]: I0219 00:08:09.724960 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:09 crc kubenswrapper[4889]: I0219 00:08:09.725340 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:09 crc kubenswrapper[4889]: E0219 00:08:09.725335 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:09 crc kubenswrapper[4889]: I0219 00:08:09.725752 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:09 crc kubenswrapper[4889]: E0219 00:08:09.725986 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:09 crc kubenswrapper[4889]: E0219 00:08:09.726095 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:10 crc kubenswrapper[4889]: I0219 00:08:10.724647 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:10 crc kubenswrapper[4889]: E0219 00:08:10.724973 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.382177 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/1.log" Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.383186 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/0.log" Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.383280 4889 generic.go:334] "Generic (PLEG): container finished" podID="7dcfc583-b6f2-415a-a4f0-adb70f4865c8" containerID="8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3" exitCode=1 Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.383372 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmhk6" event={"ID":"7dcfc583-b6f2-415a-a4f0-adb70f4865c8","Type":"ContainerDied","Data":"8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3"} Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.383433 4889 scope.go:117] "RemoveContainer" containerID="af85e84a2fbefbfa73b1add69c90779021f3d9a1b3518b1b512c0e0813d97bec" Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.384147 4889 scope.go:117] "RemoveContainer" containerID="8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3" Feb 19 00:08:11 crc kubenswrapper[4889]: E0219 00:08:11.385272 4889 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-qmhk6_openshift-multus(7dcfc583-b6f2-415a-a4f0-adb70f4865c8)\"" pod="openshift-multus/multus-qmhk6" podUID="7dcfc583-b6f2-415a-a4f0-adb70f4865c8" Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.407384 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=15.407361442 podStartE2EDuration="15.407361442s" podCreationTimestamp="2026-02-19 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:04.766120659 +0000 UTC m=+110.730785690" watchObservedRunningTime="2026-02-19 00:08:11.407361442 +0000 UTC m=+117.372026433" Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.724676 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.724769 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:11 crc kubenswrapper[4889]: I0219 00:08:11.724853 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:11 crc kubenswrapper[4889]: E0219 00:08:11.724904 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:11 crc kubenswrapper[4889]: E0219 00:08:11.725050 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:11 crc kubenswrapper[4889]: E0219 00:08:11.725288 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:12 crc kubenswrapper[4889]: I0219 00:08:12.390192 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/1.log" Feb 19 00:08:12 crc kubenswrapper[4889]: I0219 00:08:12.724276 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:12 crc kubenswrapper[4889]: E0219 00:08:12.724658 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:13 crc kubenswrapper[4889]: I0219 00:08:13.724100 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:13 crc kubenswrapper[4889]: I0219 00:08:13.724270 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:13 crc kubenswrapper[4889]: E0219 00:08:13.724329 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:13 crc kubenswrapper[4889]: I0219 00:08:13.724270 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:13 crc kubenswrapper[4889]: E0219 00:08:13.724484 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:13 crc kubenswrapper[4889]: E0219 00:08:13.724555 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:14 crc kubenswrapper[4889]: I0219 00:08:14.727822 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:14 crc kubenswrapper[4889]: E0219 00:08:14.727960 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:14 crc kubenswrapper[4889]: I0219 00:08:14.729324 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:08:14 crc kubenswrapper[4889]: E0219 00:08:14.738942 4889 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 00:08:14 crc kubenswrapper[4889]: E0219 00:08:14.806261 4889 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 19 00:08:15 crc kubenswrapper[4889]: I0219 00:08:15.405584 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/3.log" Feb 19 00:08:15 crc kubenswrapper[4889]: I0219 00:08:15.408572 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerStarted","Data":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} Feb 19 00:08:15 crc kubenswrapper[4889]: I0219 00:08:15.409139 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:08:15 crc kubenswrapper[4889]: I0219 00:08:15.449913 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podStartSLOduration=100.449873703 podStartE2EDuration="1m40.449873703s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:15.447078207 +0000 UTC m=+121.411743208" watchObservedRunningTime="2026-02-19 00:08:15.449873703 +0000 UTC m=+121.414538684" Feb 19 00:08:15 crc kubenswrapper[4889]: I0219 00:08:15.644349 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sw97l"] Feb 19 00:08:15 crc kubenswrapper[4889]: I0219 00:08:15.644632 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:15 crc kubenswrapper[4889]: E0219 00:08:15.644808 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:15 crc kubenswrapper[4889]: I0219 00:08:15.724660 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:15 crc kubenswrapper[4889]: I0219 00:08:15.724673 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:15 crc kubenswrapper[4889]: I0219 00:08:15.724840 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:15 crc kubenswrapper[4889]: E0219 00:08:15.725067 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:15 crc kubenswrapper[4889]: E0219 00:08:15.726309 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:15 crc kubenswrapper[4889]: E0219 00:08:15.726596 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:17 crc kubenswrapper[4889]: I0219 00:08:17.724798 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:17 crc kubenswrapper[4889]: I0219 00:08:17.724899 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:17 crc kubenswrapper[4889]: I0219 00:08:17.724971 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:17 crc kubenswrapper[4889]: E0219 00:08:17.726100 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:17 crc kubenswrapper[4889]: E0219 00:08:17.725726 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:17 crc kubenswrapper[4889]: E0219 00:08:17.725997 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:17 crc kubenswrapper[4889]: I0219 00:08:17.724994 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:17 crc kubenswrapper[4889]: E0219 00:08:17.726204 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:19 crc kubenswrapper[4889]: I0219 00:08:19.725117 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:19 crc kubenswrapper[4889]: I0219 00:08:19.725181 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:19 crc kubenswrapper[4889]: E0219 00:08:19.725272 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:19 crc kubenswrapper[4889]: I0219 00:08:19.725274 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:19 crc kubenswrapper[4889]: E0219 00:08:19.725432 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:19 crc kubenswrapper[4889]: I0219 00:08:19.725350 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:19 crc kubenswrapper[4889]: E0219 00:08:19.725579 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:19 crc kubenswrapper[4889]: E0219 00:08:19.725635 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:19 crc kubenswrapper[4889]: E0219 00:08:19.807934 4889 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 00:08:21 crc kubenswrapper[4889]: I0219 00:08:21.724352 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:21 crc kubenswrapper[4889]: I0219 00:08:21.724451 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:21 crc kubenswrapper[4889]: E0219 00:08:21.724517 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:21 crc kubenswrapper[4889]: I0219 00:08:21.724473 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:21 crc kubenswrapper[4889]: E0219 00:08:21.724718 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:21 crc kubenswrapper[4889]: E0219 00:08:21.724843 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:21 crc kubenswrapper[4889]: I0219 00:08:21.724896 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:21 crc kubenswrapper[4889]: E0219 00:08:21.724992 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:22 crc kubenswrapper[4889]: I0219 00:08:22.726319 4889 scope.go:117] "RemoveContainer" containerID="8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3" Feb 19 00:08:23 crc kubenswrapper[4889]: I0219 00:08:23.445952 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/1.log" Feb 19 00:08:23 crc kubenswrapper[4889]: I0219 00:08:23.446024 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmhk6" event={"ID":"7dcfc583-b6f2-415a-a4f0-adb70f4865c8","Type":"ContainerStarted","Data":"e6a5d8583a907ae88f80f4668dcf9c28a83c7fa6c71e1f663c6d1a05d56580cd"} Feb 19 00:08:23 crc kubenswrapper[4889]: I0219 00:08:23.724431 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:23 crc kubenswrapper[4889]: I0219 00:08:23.724461 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:23 crc kubenswrapper[4889]: I0219 00:08:23.724452 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:23 crc kubenswrapper[4889]: I0219 00:08:23.724492 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:23 crc kubenswrapper[4889]: E0219 00:08:23.724747 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sw97l" podUID="66e9b544-b66c-43d6-8d8d-d6231a70a6be" Feb 19 00:08:23 crc kubenswrapper[4889]: E0219 00:08:23.724890 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 00:08:23 crc kubenswrapper[4889]: E0219 00:08:23.725057 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 00:08:23 crc kubenswrapper[4889]: E0219 00:08:23.725297 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.724551 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.724590 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.724733 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.724787 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.728156 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.728571 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.729258 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.729704 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.730418 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 00:08:25 crc kubenswrapper[4889]: I0219 00:08:25.731967 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.012432 4889 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.062904 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zmbvn"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.063536 
4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.074309 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.074665 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.075414 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.075733 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.076398 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.077664 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.077948 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.077973 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.078188 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.078514 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.092074 4889 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-api/machine-api-operator-5694c8668f-pmgn8"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.092656 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.094552 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.095106 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.097567 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.097793 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.097964 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.098177 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.098378 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.098558 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.098903 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.098929 
4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.100660 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.100909 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-658hr"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.101689 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.102372 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.102909 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.104240 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.104748 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.111492 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p8bs5"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.112261 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.113853 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fp64s"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.114356 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fp64s" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.115183 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.115830 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.121790 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.123071 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29524320-vf894"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.123162 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.123338 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.123903 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.123917 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.124651 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.124868 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.124963 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-55d27"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.125533 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.125623 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.125689 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.126202 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.126697 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-ft7cw"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.127484 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.129869 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.129983 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.130102 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.130176 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.130733 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.130866 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.130955 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xdn9r"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.130965 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.132041 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.132055 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 00:08:34 crc 
kubenswrapper[4889]: I0219 00:08:34.134578 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.132540 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.132599 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.132807 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.135031 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.135722 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.136113 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.136357 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.136525 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.136999 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.137200 4889 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.137418 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.137540 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.137342 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.138580 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.138669 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.138801 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.138825 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.138895 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.138935 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.138938 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9b8dc"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.139021 4889 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.139084 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.139146 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.139242 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.139287 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.139376 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.148413 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.148727 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.151462 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.151662 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.151785 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.152055 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.152170 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.152512 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.152650 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.152776 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.152851 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.152936 4889 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.154190 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.156305 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.157738 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.176775 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.176797 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.176963 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.177007 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.177393 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.177537 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.177578 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.177991 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.178124 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.178236 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.178370 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.178462 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.178556 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.178771 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.178891 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.179087 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.179293 4889 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.178930 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.179454 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.186407 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.186573 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.186703 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.187230 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-stmrq"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.187613 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.188810 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.189764 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.189828 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.190848 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.206395 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.206565 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.206673 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.213849 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.214323 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.214353 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.217931 4889 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.217946 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218144 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-console-serving-cert\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218179 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218211 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-service-ca\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218244 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-config\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218259 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-console-config\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218277 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c7c5e287-0d1b-4299-830f-9a32db4a5486-etcd-client\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218293 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c7c5e287-0d1b-4299-830f-9a32db4a5486-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218313 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-console-oauth-config\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218330 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqklg\" (UniqueName: \"kubernetes.io/projected/953c0d0a-1dec-4045-af86-0c6547b3a336-kube-api-access-zqklg\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218349 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-encryption-config\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218370 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d772bc14-d22b-4af2-b640-f6a633e2b8b9-config\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218432 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4b7z\" (UniqueName: \"kubernetes.io/projected/d772bc14-d22b-4af2-b640-f6a633e2b8b9-kube-api-access-g4b7z\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218467 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-client-ca\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218494 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-config\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218518 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-etcd-serving-ca\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218540 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcmpr\" (UniqueName: \"kubernetes.io/projected/316969ac-051e-4145-9536-e65bc7103089-kube-api-access-fcmpr\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218577 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-audit\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218603 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218646 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218605 4889 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-node-pullsecrets\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218781 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-audit-dir\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218814 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ad552dd-0ce7-421e-a8ed-346c5e494d89-auth-proxy-config\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219479 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xf92p"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.218842 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-client-ca\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219549 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-config\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219604 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/953c0d0a-1dec-4045-af86-0c6547b3a336-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219643 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-serving-cert\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219666 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-etcd-client\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219685 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219721 
4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c7c5e287-0d1b-4299-830f-9a32db4a5486-audit-dir\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219743 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-oauth-serving-cert\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219768 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/953c0d0a-1dec-4045-af86-0c6547b3a336-images\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219792 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad552dd-0ce7-421e-a8ed-346c5e494d89-config\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219820 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s24j2\" (UniqueName: \"kubernetes.io/projected/0ad552dd-0ce7-421e-a8ed-346c5e494d89-kube-api-access-s24j2\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219842 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219843 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c7c5e287-0d1b-4299-830f-9a32db4a5486-encryption-config\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219977 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ad552dd-0ce7-421e-a8ed-346c5e494d89-machine-approver-tls\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.219995 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdw94\" (UniqueName: \"kubernetes.io/projected/c7c5e287-0d1b-4299-830f-9a32db4a5486-kube-api-access-bdw94\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220020 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cfbfb11-99d4-42aa-bc2f-d4263718d5c9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztf8h\" (UID: \"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220040 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d772bc14-d22b-4af2-b640-f6a633e2b8b9-serving-cert\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220056 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d772bc14-d22b-4af2-b640-f6a633e2b8b9-service-ca-bundle\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220072 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953c0d0a-1dec-4045-af86-0c6547b3a336-config\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220088 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7c5e287-0d1b-4299-830f-9a32db4a5486-serving-cert\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220107 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-24b9h\" (UniqueName: \"kubernetes.io/projected/dff75125-78d5-4ee7-8a76-64087e781dd3-kube-api-access-24b9h\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220152 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfbfb11-99d4-42aa-bc2f-d4263718d5c9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztf8h\" (UID: \"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220173 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnmlq\" (UniqueName: \"kubernetes.io/projected/4cfbfb11-99d4-42aa-bc2f-d4263718d5c9-kube-api-access-mnmlq\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztf8h\" (UID: \"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220155 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220207 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ztz9\" (UniqueName: \"kubernetes.io/projected/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-kube-api-access-8ztz9\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220263 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220284 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff75125-78d5-4ee7-8a76-64087e781dd3-serving-cert\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220302 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4t6c\" (UniqueName: \"kubernetes.io/projected/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-kube-api-access-p4t6c\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220317 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c7c5e287-0d1b-4299-830f-9a32db4a5486-audit-policies\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220336 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7c5e287-0d1b-4299-830f-9a32db4a5486-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220351 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/316969ac-051e-4145-9536-e65bc7103089-serving-cert\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220369 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d772bc14-d22b-4af2-b640-f6a633e2b8b9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220385 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-trusted-ca-bundle\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.220429 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-image-import-ca\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.221199 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.221442 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.224164 4889 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.256021 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.265662 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.275540 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.277495 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.282444 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.282665 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jth4m"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.282978 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.283101 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.283451 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mkqxj"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.283558 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.283721 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.283825 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jth4m" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.284009 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mp2px"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.284043 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.284406 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.284533 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.284692 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.284016 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.286015 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.286504 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.286523 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.288261 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bdl2b"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.288984 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.289522 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.289716 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.290786 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.295508 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.296327 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.301554 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.302301 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.302733 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfp24"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.303159 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x66jj"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.304077 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.304625 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.304962 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.305166 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.305552 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.305567 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.305681 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.309165 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.309711 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.310533 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.314280 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pmgn8"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.316941 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.316979 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-658hr"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.316991 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29524320-vf894"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.318637 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fp64s"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.319830 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p8bs5"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321206 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4t6c\" (UniqueName: \"kubernetes.io/projected/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-kube-api-access-p4t6c\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321247 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c7c5e287-0d1b-4299-830f-9a32db4a5486-audit-policies\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321265 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7c5e287-0d1b-4299-830f-9a32db4a5486-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321283 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/316969ac-051e-4145-9536-e65bc7103089-serving-cert\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321302 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff75125-78d5-4ee7-8a76-64087e781dd3-serving-cert\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321319 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d772bc14-d22b-4af2-b640-f6a633e2b8b9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321335 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-trusted-ca-bundle\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321351 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-image-import-ca\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321373 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-console-serving-cert\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321387 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321404 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-service-ca\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321420 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-config\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321436 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-console-config\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321450 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c7c5e287-0d1b-4299-830f-9a32db4a5486-etcd-client\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321466 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c7c5e287-0d1b-4299-830f-9a32db4a5486-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321484 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-console-oauth-config\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321500 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-encryption-config\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321516 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d772bc14-d22b-4af2-b640-f6a633e2b8b9-config\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321541 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqklg\" (UniqueName: \"kubernetes.io/projected/953c0d0a-1dec-4045-af86-0c6547b3a336-kube-api-access-zqklg\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321558 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4b7z\" (UniqueName: \"kubernetes.io/projected/d772bc14-d22b-4af2-b640-f6a633e2b8b9-kube-api-access-g4b7z\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321572 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-client-ca\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321589 
4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-config\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321606 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcmpr\" (UniqueName: \"kubernetes.io/projected/316969ac-051e-4145-9536-e65bc7103089-kube-api-access-fcmpr\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321622 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-audit\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321640 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-etcd-serving-ca\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321656 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-node-pullsecrets\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 
00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321671 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-audit-dir\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-client-ca\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321704 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-config\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321727 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/953c0d0a-1dec-4045-af86-0c6547b3a336-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321743 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ad552dd-0ce7-421e-a8ed-346c5e494d89-auth-proxy-config\") pod \"machine-approver-56656f9798-d6xlk\" (UID: 
\"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321766 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-serving-cert\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321781 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-etcd-client\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321810 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321833 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c7c5e287-0d1b-4299-830f-9a32db4a5486-audit-dir\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321850 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-oauth-serving-cert\") pod \"console-f9d7485db-ft7cw\" (UID: 
\"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321868 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/953c0d0a-1dec-4045-af86-0c6547b3a336-images\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321889 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad552dd-0ce7-421e-a8ed-346c5e494d89-config\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321906 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s24j2\" (UniqueName: \"kubernetes.io/projected/0ad552dd-0ce7-421e-a8ed-346c5e494d89-kube-api-access-s24j2\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321922 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c7c5e287-0d1b-4299-830f-9a32db4a5486-encryption-config\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321939 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/0ad552dd-0ce7-421e-a8ed-346c5e494d89-machine-approver-tls\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321956 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdw94\" (UniqueName: \"kubernetes.io/projected/c7c5e287-0d1b-4299-830f-9a32db4a5486-kube-api-access-bdw94\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321974 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cfbfb11-99d4-42aa-bc2f-d4263718d5c9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztf8h\" (UID: \"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.321992 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d772bc14-d22b-4af2-b640-f6a633e2b8b9-serving-cert\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.322008 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d772bc14-d22b-4af2-b640-f6a633e2b8b9-service-ca-bundle\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 
00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.322023 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7c5e287-0d1b-4299-830f-9a32db4a5486-serving-cert\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.322037 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24b9h\" (UniqueName: \"kubernetes.io/projected/dff75125-78d5-4ee7-8a76-64087e781dd3-kube-api-access-24b9h\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.322209 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953c0d0a-1dec-4045-af86-0c6547b3a336-config\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.322244 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnmlq\" (UniqueName: \"kubernetes.io/projected/4cfbfb11-99d4-42aa-bc2f-d4263718d5c9-kube-api-access-mnmlq\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztf8h\" (UID: \"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.322260 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ztz9\" (UniqueName: \"kubernetes.io/projected/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-kube-api-access-8ztz9\") pod 
\"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.322282 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfbfb11-99d4-42aa-bc2f-d4263718d5c9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztf8h\" (UID: \"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.323348 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/953c0d0a-1dec-4045-af86-0c6547b3a336-images\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.324334 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ad552dd-0ce7-421e-a8ed-346c5e494d89-config\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.324758 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-audit\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.324787 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/c7c5e287-0d1b-4299-830f-9a32db4a5486-audit-policies\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.325236 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7c5e287-0d1b-4299-830f-9a32db4a5486-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.326013 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-config\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.326062 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9b8dc"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.326271 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d772bc14-d22b-4af2-b640-f6a633e2b8b9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.326414 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.326565 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d772bc14-d22b-4af2-b640-f6a633e2b8b9-config\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.326826 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d772bc14-d22b-4af2-b640-f6a633e2b8b9-service-ca-bundle\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.326904 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.327270 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-console-config\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.327638 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-service-ca\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.327381 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-etcd-serving-ca\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.327749 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-node-pullsecrets\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.328105 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.328193 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-audit-dir\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.330038 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0ad552dd-0ce7-421e-a8ed-346c5e494d89-auth-proxy-config\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.330280 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-image-import-ca\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " 
pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.330328 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-trusted-ca-bundle\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.330790 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-config\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.331018 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zmbvn"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.331362 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/316969ac-051e-4145-9536-e65bc7103089-serving-cert\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.331379 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.331960 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cfbfb11-99d4-42aa-bc2f-d4263718d5c9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztf8h\" (UID: \"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.331997 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.332083 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c7c5e287-0d1b-4299-830f-9a32db4a5486-audit-dir\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.332176 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0ad552dd-0ce7-421e-a8ed-346c5e494d89-machine-approver-tls\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.332252 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff75125-78d5-4ee7-8a76-64087e781dd3-serving-cert\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.332800 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-client-ca\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.332902 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-trusted-ca-bundle\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.332943 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cfbfb11-99d4-42aa-bc2f-d4263718d5c9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztf8h\" (UID: \"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.333546 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7c5e287-0d1b-4299-830f-9a32db4a5486-serving-cert\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.330697 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-client-ca\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.333841 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-serving-cert\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 
00:08:34.333901 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c7c5e287-0d1b-4299-830f-9a32db4a5486-etcd-client\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.334076 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-oauth-serving-cert\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.334138 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-stmrq"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.334504 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-console-serving-cert\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.334643 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-config\") pod \"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.334721 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c7c5e287-0d1b-4299-830f-9a32db4a5486-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: 
\"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.335145 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c7c5e287-0d1b-4299-830f-9a32db4a5486-encryption-config\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.335166 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.335751 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-console-oauth-config\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.336165 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-etcd-client\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.336328 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/953c0d0a-1dec-4045-af86-0c6547b3a336-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.337691 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-mkqxj"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.338274 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/953c0d0a-1dec-4045-af86-0c6547b3a336-config\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.339314 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.341742 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-encryption-config\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.341851 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-55d27"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.343234 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xdn9r"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.352129 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.354943 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d772bc14-d22b-4af2-b640-f6a633e2b8b9-serving-cert\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.361813 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.363141 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.365054 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.367392 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.368396 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9sgpt"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.370539 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-x7hdg"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.370809 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.370995 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.371079 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x7hdg" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.371840 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ft7cw"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.372848 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.374308 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.375340 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.377068 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.378693 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.379775 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mp2px"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.380865 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9sgpt"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.381963 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.383041 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.384103 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x7hdg"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.385304 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.386386 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.386996 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x66jj"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.388640 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.389985 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfp24"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.391048 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.392182 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bdl2b"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.393574 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2"] Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.394621 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx"] Feb 
19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.405538 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.426569 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.446009 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.466744 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.486086 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.506881 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.526259 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.545887 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.566065 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.586749 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.606128 4889 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.625886 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.646962 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.707107 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.724959 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf74118f-1818-4680-98a6-a32fe3cc2725-config\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725093 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww7x6\" (UniqueName: \"kubernetes.io/projected/61f5ec49-57ce-4a4a-86b6-fed43a63c82e-kube-api-access-ww7x6\") pod \"downloads-7954f5f757-fp64s\" (UID: \"61f5ec49-57ce-4a4a-86b6-fed43a63c82e\") " pod="openshift-console/downloads-7954f5f757-fp64s" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725188 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 
00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725291 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/585fe6a3-bec2-42fb-bc1c-75203481f19a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725357 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkdsn\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-kube-api-access-bkdsn\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725444 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrrvt\" (UniqueName: \"kubernetes.io/projected/5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5-kube-api-access-mrrvt\") pod \"openshift-apiserver-operator-796bbdcf4f-mvjfx\" (UID: \"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725547 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725609 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-bound-sa-token\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725658 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mvjfx\" (UID: \"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725707 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a539b86-26c9-4270-aa81-6ca18af85223-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725770 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv79r\" (UniqueName: \"kubernetes.io/projected/956660a2-d159-418c-933b-147711dd34b9-kube-api-access-kv79r\") pod \"openshift-config-operator-7777fb866f-2qk2n\" (UID: \"956660a2-d159-418c-933b-147711dd34b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725806 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xdn9r\" 
(UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725846 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0bdda273-5648-4fa5-868c-48142d764012-serviceca\") pod \"image-pruner-29524320-vf894\" (UID: \"0bdda273-5648-4fa5-868c-48142d764012\") " pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725886 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.725965 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726019 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65h2h\" (UniqueName: \"kubernetes.io/projected/f3d1db96-e34d-4c20-9556-edec9e27858c-kube-api-access-65h2h\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726069 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-trusted-ca\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726117 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-policies\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726198 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xzx6\" (UniqueName: \"kubernetes.io/projected/5a539b86-26c9-4270-aa81-6ca18af85223-kube-api-access-7xzx6\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726336 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726402 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-tls\") pod \"image-registry-697d97f7c8-9b8dc\" 
(UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726462 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwpp\" (UniqueName: \"kubernetes.io/projected/0bdda273-5648-4fa5-868c-48142d764012-kube-api-access-gwwpp\") pod \"image-pruner-29524320-vf894\" (UID: \"0bdda273-5648-4fa5-868c-48142d764012\") " pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726512 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/956660a2-d159-418c-933b-147711dd34b9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qk2n\" (UID: \"956660a2-d159-418c-933b-147711dd34b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726726 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a539b86-26c9-4270-aa81-6ca18af85223-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.726835 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-certificates\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 
00:08:34.726849 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 00:08:34 crc kubenswrapper[4889]: E0219 00:08:34.727006 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.226978553 +0000 UTC m=+141.191643624 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.727182 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mvjfx\" (UID: \"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.727316 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.727480 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/956660a2-d159-418c-933b-147711dd34b9-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qk2n\" (UID: \"956660a2-d159-418c-933b-147711dd34b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.727585 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.727740 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.727868 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/585fe6a3-bec2-42fb-bc1c-75203481f19a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.727972 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.728080 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf74118f-1818-4680-98a6-a32fe3cc2725-serving-cert\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.728297 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf74118f-1818-4680-98a6-a32fe3cc2725-trusted-ca\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.728433 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr8jz\" (UniqueName: \"kubernetes.io/projected/cf74118f-1818-4680-98a6-a32fe3cc2725-kube-api-access-rr8jz\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.728589 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.728679 4889 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a539b86-26c9-4270-aa81-6ca18af85223-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.728714 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-dir\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.728815 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.749069 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.767202 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.790141 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.807497 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 
00:08:34.827587 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.830616 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:34 crc kubenswrapper[4889]: E0219 00:08:34.831126 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.331088806 +0000 UTC m=+141.295753837 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.831267 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ad046d9-2ae3-470e-9cde-bbed21290815-proxy-tls\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.831443 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/4e55dbef-89d4-4f2f-a4db-714290b34dad-apiservice-cert\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.831484 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8310f440-1e69-4e39-8a1d-2f02c1fb51c0-proxy-tls\") pod \"machine-config-controller-84d6567774-h6mnc\" (UID: \"8310f440-1e69-4e39-8a1d-2f02c1fb51c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.831614 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.831769 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-tls\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.831964 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ad046d9-2ae3-470e-9cde-bbed21290815-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832084 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzccx\" (UniqueName: \"kubernetes.io/projected/cb574028-4d71-408f-9fe7-f2497636fffd-kube-api-access-jzccx\") pod \"migrator-59844c95c7-h6lzz\" (UID: \"cb574028-4d71-408f-9fe7-f2497636fffd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832195 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/956660a2-d159-418c-933b-147711dd34b9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qk2n\" (UID: \"956660a2-d159-418c-933b-147711dd34b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832277 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a539b86-26c9-4270-aa81-6ca18af85223-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832323 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dc128cd-807d-4a60-9629-be4dbf53cf9b-trusted-ca\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832402 4889 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832444 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/356a8f86-9fc9-4c30-accb-4866e29407fd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-grf28\" (UID: \"356a8f86-9fc9-4c30-accb-4866e29407fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832518 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/956660a2-d159-418c-933b-147711dd34b9-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qk2n\" (UID: \"956660a2-d159-418c-933b-147711dd34b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832589 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951c61d4-aa85-4db8-8ddf-f7889e8f85ba-config\") pod \"kube-controller-manager-operator-78b949d7b-7jsmd\" (UID: \"951c61d4-aa85-4db8-8ddf-f7889e8f85ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832630 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832725 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rgjh\" (UniqueName: \"kubernetes.io/projected/ac4fd89a-3d15-4886-ad35-318136b7a519-kube-api-access-6rgjh\") pod \"collect-profiles-29524320-7s7mv\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:34 crc kubenswrapper[4889]: E0219 00:08:34.832774 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.332741386 +0000 UTC m=+141.297406417 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832826 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/585fe6a3-bec2-42fb-bc1c-75203481f19a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832880 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e3750c-2800-4e2b-90b5-7fb1c06af232-config-volume\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832930 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr8jz\" (UniqueName: \"kubernetes.io/projected/cf74118f-1818-4680-98a6-a32fe3cc2725-kube-api-access-rr8jz\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832953 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/956660a2-d159-418c-933b-147711dd34b9-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2qk2n\" (UID: \"956660a2-d159-418c-933b-147711dd34b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.832973 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b73e42-e2e2-4e2a-8df3-879f78fba75f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qb2fs\" (UID: \"36b73e42-e2e2-4e2a-8df3-879f78fba75f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833158 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktbjv\" (UniqueName: \"kubernetes.io/projected/851d551b-ead7-4bd9-8d0a-88227edc9aad-kube-api-access-ktbjv\") pod \"catalog-operator-68c6474976-8w9hf\" (UID: \"851d551b-ead7-4bd9-8d0a-88227edc9aad\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833304 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf74118f-1818-4680-98a6-a32fe3cc2725-config\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833365 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-registration-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833400 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/851d551b-ead7-4bd9-8d0a-88227edc9aad-profile-collector-cert\") pod \"catalog-operator-68c6474976-8w9hf\" (UID: \"851d551b-ead7-4bd9-8d0a-88227edc9aad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833455 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktwhz\" (UniqueName: \"kubernetes.io/projected/4e55dbef-89d4-4f2f-a4db-714290b34dad-kube-api-access-ktwhz\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833503 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/4e55dbef-89d4-4f2f-a4db-714290b34dad-webhook-cert\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833543 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5d8865d-92b4-4a08-a6f6-f10639d9b709-certs\") pod \"machine-config-server-jth4m\" (UID: \"f5d8865d-92b4-4a08-a6f6-f10639d9b709\") " pod="openshift-machine-config-operator/machine-config-server-jth4m" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833581 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/951c61d4-aa85-4db8-8ddf-f7889e8f85ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7jsmd\" (UID: \"951c61d4-aa85-4db8-8ddf-f7889e8f85ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833624 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d47bfc68-8f0f-4717-be7a-fccf98897cda-metrics-certs\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833665 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkdsn\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-kube-api-access-bkdsn\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc 
kubenswrapper[4889]: I0219 00:08:34.833736 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrrvt\" (UniqueName: \"kubernetes.io/projected/5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5-kube-api-access-mrrvt\") pod \"openshift-apiserver-operator-796bbdcf4f-mvjfx\" (UID: \"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833824 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e039dfef-e576-4bf7-8d87-9569ae038395-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mp2px\" (UID: \"e039dfef-e576-4bf7-8d87-9569ae038395\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833880 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23cfc2c7-bd77-4361-b718-c0b6f13d8475-etcd-client\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833932 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d47bfc68-8f0f-4717-be7a-fccf98897cda-stats-auth\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.833980 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac4fd89a-3d15-4886-ad35-318136b7a519-secret-volume\") pod 
\"collect-profiles-29524320-7s7mv\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834016 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-mountpoint-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834054 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7422e6b-7aee-4c6f-95e2-233d4040f239-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pjqvk\" (UID: \"f7422e6b-7aee-4c6f-95e2-233d4040f239\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834099 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7dc128cd-807d-4a60-9629-be4dbf53cf9b-metrics-tls\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834169 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834269 4889 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a539b86-26c9-4270-aa81-6ca18af85223-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834308 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm4hp\" (UniqueName: \"kubernetes.io/projected/7dc128cd-807d-4a60-9629-be4dbf53cf9b-kube-api-access-bm4hp\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834390 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8e3750c-2800-4e2b-90b5-7fb1c06af232-metrics-tls\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834436 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d67eea0c-6059-4e76-8bdf-0fb3c25e2717-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-79hxc\" (UID: \"d67eea0c-6059-4e76-8bdf-0fb3c25e2717\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834490 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0bdda273-5648-4fa5-868c-48142d764012-serviceca\") pod \"image-pruner-29524320-vf894\" 
(UID: \"0bdda273-5648-4fa5-868c-48142d764012\") " pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834526 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv79r\" (UniqueName: \"kubernetes.io/projected/956660a2-d159-418c-933b-147711dd34b9-kube-api-access-kv79r\") pod \"openshift-config-operator-7777fb866f-2qk2n\" (UID: \"956660a2-d159-418c-933b-147711dd34b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834568 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7ad046d9-2ae3-470e-9cde-bbed21290815-images\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834632 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/585fe6a3-bec2-42fb-bc1c-75203481f19a-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834638 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23cfc2c7-bd77-4361-b718-c0b6f13d8475-config\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834841 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xzx6\" (UniqueName: 
\"kubernetes.io/projected/5a539b86-26c9-4270-aa81-6ca18af85223-kube-api-access-7xzx6\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834911 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sfp24\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.834980 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c-config\") pod \"kube-apiserver-operator-766d6c64bb-bv6w6\" (UID: \"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.835037 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d-signing-cabundle\") pod \"service-ca-9c57cc56f-mkqxj\" (UID: \"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.835114 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hbfp\" (UniqueName: \"kubernetes.io/projected/b24fb26c-0cf1-4094-8c3d-d274fee59a7a-kube-api-access-9hbfp\") pod \"package-server-manager-789f6589d5-8sg2r\" (UID: \"b24fb26c-0cf1-4094-8c3d-d274fee59a7a\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.835179 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23cfc2c7-bd77-4361-b718-c0b6f13d8475-etcd-ca\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.837148 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0bdda273-5648-4fa5-868c-48142d764012-serviceca\") pod \"image-pruner-29524320-vf894\" (UID: \"0bdda273-5648-4fa5-868c-48142d764012\") " pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.838375 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7dc128cd-807d-4a60-9629-be4dbf53cf9b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.838456 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2665bd86-15fa-4593-9d82-1e3753db401d-metrics-tls\") pod \"dns-operator-744455d44c-bdl2b\" (UID: \"2665bd86-15fa-4593-9d82-1e3753db401d\") " pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.838672 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwwpp\" (UniqueName: \"kubernetes.io/projected/0bdda273-5648-4fa5-868c-48142d764012-kube-api-access-gwwpp\") 
pod \"image-pruner-29524320-vf894\" (UID: \"0bdda273-5648-4fa5-868c-48142d764012\") " pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.838933 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf74118f-1818-4680-98a6-a32fe3cc2725-config\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.838933 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7422e6b-7aee-4c6f-95e2-233d4040f239-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pjqvk\" (UID: \"f7422e6b-7aee-4c6f-95e2-233d4040f239\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.839030 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4xv\" (UniqueName: \"kubernetes.io/projected/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-kube-api-access-ng4xv\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.839074 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bv6w6\" (UID: \"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.839113 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac4fd89a-3d15-4886-ad35-318136b7a519-config-volume\") pod \"collect-profiles-29524320-7s7mv\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.839154 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-certificates\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.839189 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-csi-data-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.839249 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drrjd\" (UniqueName: \"kubernetes.io/projected/2665bd86-15fa-4593-9d82-1e3753db401d-kube-api-access-drrjd\") pod \"dns-operator-744455d44c-bdl2b\" (UID: \"2665bd86-15fa-4593-9d82-1e3753db401d\") " pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.839366 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mvjfx\" (UID: \"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.839423 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34cffbee-ca52-499c-a1cb-4f127b9d352f-serving-cert\") pod \"service-ca-operator-777779d784-k8sg2\" (UID: \"34cffbee-ca52-499c-a1cb-4f127b9d352f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.840106 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/956660a2-d159-418c-933b-147711dd34b9-serving-cert\") pod \"openshift-config-operator-7777fb866f-2qk2n\" (UID: \"956660a2-d159-418c-933b-147711dd34b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.840789 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.840866 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b73e42-e2e2-4e2a-8df3-879f78fba75f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qb2fs\" (UID: \"36b73e42-e2e2-4e2a-8df3-879f78fba75f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.840987 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a539b86-26c9-4270-aa81-6ca18af85223-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.841015 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwxbs\" (UniqueName: \"kubernetes.io/projected/36b73e42-e2e2-4e2a-8df3-879f78fba75f-kube-api-access-xwxbs\") pod \"kube-storage-version-migrator-operator-b67b599dd-qb2fs\" (UID: \"36b73e42-e2e2-4e2a-8df3-879f78fba75f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.841145 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d-signing-key\") pod \"service-ca-9c57cc56f-mkqxj\" (UID: \"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.841263 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2ef0e062-1746-4c5e-9e1f-8f75282a9404-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m47cx\" (UID: \"2ef0e062-1746-4c5e-9e1f-8f75282a9404\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.841370 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcbzq\" (UniqueName: \"kubernetes.io/projected/f7a4c945-2b4c-4b30-ad06-9158ce04018e-kube-api-access-hcbzq\") pod 
\"marketplace-operator-79b997595-sfp24\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.841448 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.841631 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdwll\" (UniqueName: \"kubernetes.io/projected/2ef0e062-1746-4c5e-9e1f-8f75282a9404-kube-api-access-wdwll\") pod \"olm-operator-6b444d44fb-m47cx\" (UID: \"2ef0e062-1746-4c5e-9e1f-8f75282a9404\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.841702 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvff\" (UniqueName: \"kubernetes.io/projected/e039dfef-e576-4bf7-8d87-9569ae038395-kube-api-access-xvvff\") pod \"multus-admission-controller-857f4d67dd-mp2px\" (UID: \"e039dfef-e576-4bf7-8d87-9569ae038395\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.841952 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf74118f-1818-4680-98a6-a32fe3cc2725-serving-cert\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 
00:08:34.842034 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf74118f-1818-4680-98a6-a32fe3cc2725-trusted-ca\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.842098 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljs6n\" (UniqueName: \"kubernetes.io/projected/b8e3750c-2800-4e2b-90b5-7fb1c06af232-kube-api-access-ljs6n\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.842267 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.842029 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5-config\") pod \"openshift-apiserver-operator-796bbdcf4f-mvjfx\" (UID: \"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.842843 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.843337 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.843543 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d47bfc68-8f0f-4717-be7a-fccf98897cda-default-certificate\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.843706 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-certificates\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.843875 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-dir\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.843883 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-dir\") pod 
\"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.844187 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a539b86-26c9-4270-aa81-6ca18af85223-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.844206 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.844301 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.844539 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/851d551b-ead7-4bd9-8d0a-88227edc9aad-srv-cert\") pod \"catalog-operator-68c6474976-8w9hf\" (UID: \"851d551b-ead7-4bd9-8d0a-88227edc9aad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.844598 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ww7x6\" (UniqueName: \"kubernetes.io/projected/61f5ec49-57ce-4a4a-86b6-fed43a63c82e-kube-api-access-ww7x6\") pod \"downloads-7954f5f757-fp64s\" (UID: \"61f5ec49-57ce-4a4a-86b6-fed43a63c82e\") " pod="openshift-console/downloads-7954f5f757-fp64s" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.844641 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9btx\" (UniqueName: \"kubernetes.io/projected/d47bfc68-8f0f-4717-be7a-fccf98897cda-kube-api-access-n9btx\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845373 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845474 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8297g\" (UniqueName: \"kubernetes.io/projected/8310f440-1e69-4e39-8a1d-2f02c1fb51c0-kube-api-access-8297g\") pod \"machine-config-controller-84d6567774-h6mnc\" (UID: \"8310f440-1e69-4e39-8a1d-2f02c1fb51c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845519 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/585fe6a3-bec2-42fb-bc1c-75203481f19a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: 
\"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845661 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cf74118f-1818-4680-98a6-a32fe3cc2725-trusted-ca\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845770 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqbd\" (UniqueName: \"kubernetes.io/projected/a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d-kube-api-access-7fqbd\") pod \"service-ca-9c57cc56f-mkqxj\" (UID: \"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845820 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5d8865d-92b4-4a08-a6f6-f10639d9b709-node-bootstrap-token\") pod \"machine-config-server-jth4m\" (UID: \"f5d8865d-92b4-4a08-a6f6-f10639d9b709\") " pod="openshift-machine-config-operator/machine-config-server-jth4m" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845847 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2ef0e062-1746-4c5e-9e1f-8f75282a9404-srv-cert\") pod \"olm-operator-6b444d44fb-m47cx\" (UID: \"2ef0e062-1746-4c5e-9e1f-8f75282a9404\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845872 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-socket-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845894 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-plugins-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845916 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34cffbee-ca52-499c-a1cb-4f127b9d352f-config\") pod \"service-ca-operator-777779d784-k8sg2\" (UID: \"34cffbee-ca52-499c-a1cb-4f127b9d352f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845949 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23cfc2c7-bd77-4361-b718-c0b6f13d8475-etcd-service-ca\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.845973 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfw6c\" (UniqueName: \"kubernetes.io/projected/d67eea0c-6059-4e76-8bdf-0fb3c25e2717-kube-api-access-vfw6c\") pod \"control-plane-machine-set-operator-78cbb6b69f-79hxc\" (UID: \"d67eea0c-6059-4e76-8bdf-0fb3c25e2717\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 
00:08:34.845998 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bv6w6\" (UID: \"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846051 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-bound-sa-token\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846075 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mvjfx\" (UID: \"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846100 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d47bfc68-8f0f-4717-be7a-fccf98897cda-service-ca-bundle\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846130 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdmqn\" (UniqueName: \"kubernetes.io/projected/7ad046d9-2ae3-470e-9cde-bbed21290815-kube-api-access-mdmqn\") pod 
\"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846157 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b24fb26c-0cf1-4094-8c3d-d274fee59a7a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8sg2r\" (UID: \"b24fb26c-0cf1-4094-8c3d-d274fee59a7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846180 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c086d0f-4dda-4123-b2e9-fe95519e76ff-cert\") pod \"ingress-canary-x7hdg\" (UID: \"7c086d0f-4dda-4123-b2e9-fe95519e76ff\") " pod="openshift-ingress-canary/ingress-canary-x7hdg" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846203 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8310f440-1e69-4e39-8a1d-2f02c1fb51c0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-h6mnc\" (UID: \"8310f440-1e69-4e39-8a1d-2f02c1fb51c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846245 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65mvx\" (UniqueName: \"kubernetes.io/projected/f5d8865d-92b4-4a08-a6f6-f10639d9b709-kube-api-access-65mvx\") pod \"machine-config-server-jth4m\" (UID: \"f5d8865d-92b4-4a08-a6f6-f10639d9b709\") " pod="openshift-machine-config-operator/machine-config-server-jth4m" Feb 19 00:08:34 crc 
kubenswrapper[4889]: I0219 00:08:34.846269 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846293 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846525 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a539b86-26c9-4270-aa81-6ca18af85223-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846539 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7422e6b-7aee-4c6f-95e2-233d4040f239-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pjqvk\" (UID: \"f7422e6b-7aee-4c6f-95e2-233d4040f239\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846645 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/4e55dbef-89d4-4f2f-a4db-714290b34dad-tmpfs\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846710 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdht\" (UniqueName: \"kubernetes.io/projected/23cfc2c7-bd77-4361-b718-c0b6f13d8475-kube-api-access-czdht\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846816 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65h2h\" (UniqueName: \"kubernetes.io/projected/f3d1db96-e34d-4c20-9556-edec9e27858c-kube-api-access-65h2h\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846889 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvmg\" (UniqueName: \"kubernetes.io/projected/356a8f86-9fc9-4c30-accb-4866e29407fd-kube-api-access-8zvmg\") pod \"cluster-samples-operator-665b6dd947-grf28\" (UID: \"356a8f86-9fc9-4c30-accb-4866e29407fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.846948 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhr6f\" (UniqueName: \"kubernetes.io/projected/7c086d0f-4dda-4123-b2e9-fe95519e76ff-kube-api-access-nhr6f\") pod \"ingress-canary-x7hdg\" (UID: \"7c086d0f-4dda-4123-b2e9-fe95519e76ff\") " pod="openshift-ingress-canary/ingress-canary-x7hdg" Feb 19 
00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.847442    4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951c61d4-aa85-4db8-8ddf-f7889e8f85ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7jsmd\" (UID: \"951c61d4-aa85-4db8-8ddf-f7889e8f85ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.847934 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.847984 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.848065 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23cfc2c7-bd77-4361-b718-c0b6f13d8475-serving-cert\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.848310 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sfp24\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfp24"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.848419 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-trusted-ca\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.848186 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.848676 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lghrw\" (UniqueName: \"kubernetes.io/projected/34cffbee-ca52-499c-a1cb-4f127b9d352f-kube-api-access-lghrw\") pod \"service-ca-operator-777779d784-k8sg2\" (UID: \"34cffbee-ca52-499c-a1cb-4f127b9d352f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.848769 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-policies\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.849092 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.849963 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-tls\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.850089 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-policies\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.850105 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf74118f-1818-4680-98a6-a32fe3cc2725-serving-cert\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " pod="openshift-console-operator/console-operator-58897d9998-658hr"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.851304 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-trusted-ca\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.857120 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.857211 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.857289 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-mvjfx\" (UID: \"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.857525 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.857581 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.857659 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.858699 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.860527 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/585fe6a3-bec2-42fb-bc1c-75203481f19a-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.878411 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.887725 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.906285 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.926691 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.946303 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.950977 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:34 crc kubenswrapper[4889]: E0219 00:08:34.951131 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.451096949 +0000 UTC m=+141.415761980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951418 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34cffbee-ca52-499c-a1cb-4f127b9d352f-serving-cert\") pod \"service-ca-operator-777779d784-k8sg2\" (UID: \"34cffbee-ca52-499c-a1cb-4f127b9d352f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951484 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-csi-data-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951537 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drrjd\" (UniqueName: \"kubernetes.io/projected/2665bd86-15fa-4593-9d82-1e3753db401d-kube-api-access-drrjd\") pod \"dns-operator-744455d44c-bdl2b\" (UID: \"2665bd86-15fa-4593-9d82-1e3753db401d\") " pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951608 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b73e42-e2e2-4e2a-8df3-879f78fba75f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qb2fs\" (UID: \"36b73e42-e2e2-4e2a-8df3-879f78fba75f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951651 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwxbs\" (UniqueName: \"kubernetes.io/projected/36b73e42-e2e2-4e2a-8df3-879f78fba75f-kube-api-access-xwxbs\") pod \"kube-storage-version-migrator-operator-b67b599dd-qb2fs\" (UID: \"36b73e42-e2e2-4e2a-8df3-879f78fba75f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d-signing-key\") pod \"service-ca-9c57cc56f-mkqxj\" (UID: \"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951722 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2ef0e062-1746-4c5e-9e1f-8f75282a9404-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m47cx\" (UID: \"2ef0e062-1746-4c5e-9e1f-8f75282a9404\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951771 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcbzq\" (UniqueName: \"kubernetes.io/projected/f7a4c945-2b4c-4b30-ad06-9158ce04018e-kube-api-access-hcbzq\") pod \"marketplace-operator-79b997595-sfp24\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfp24"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951807 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvff\" (UniqueName: \"kubernetes.io/projected/e039dfef-e576-4bf7-8d87-9569ae038395-kube-api-access-xvvff\") pod \"multus-admission-controller-857f4d67dd-mp2px\" (UID: \"e039dfef-e576-4bf7-8d87-9569ae038395\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951843 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdwll\" (UniqueName: \"kubernetes.io/projected/2ef0e062-1746-4c5e-9e1f-8f75282a9404-kube-api-access-wdwll\") pod \"olm-operator-6b444d44fb-m47cx\" (UID: \"2ef0e062-1746-4c5e-9e1f-8f75282a9404\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951845 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-csi-data-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.951876 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljs6n\" (UniqueName: \"kubernetes.io/projected/b8e3750c-2800-4e2b-90b5-7fb1c06af232-kube-api-access-ljs6n\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952204 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d47bfc68-8f0f-4717-be7a-fccf98897cda-default-certificate\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952286 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/851d551b-ead7-4bd9-8d0a-88227edc9aad-srv-cert\") pod \"catalog-operator-68c6474976-8w9hf\" (UID: \"851d551b-ead7-4bd9-8d0a-88227edc9aad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952343 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9btx\" (UniqueName: \"kubernetes.io/projected/d47bfc68-8f0f-4717-be7a-fccf98897cda-kube-api-access-n9btx\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952384 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8297g\" (UniqueName: \"kubernetes.io/projected/8310f440-1e69-4e39-8a1d-2f02c1fb51c0-kube-api-access-8297g\") pod \"machine-config-controller-84d6567774-h6mnc\" (UID: \"8310f440-1e69-4e39-8a1d-2f02c1fb51c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952421 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fqbd\" (UniqueName: \"kubernetes.io/projected/a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d-kube-api-access-7fqbd\") pod \"service-ca-9c57cc56f-mkqxj\" (UID: \"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952466 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5d8865d-92b4-4a08-a6f6-f10639d9b709-node-bootstrap-token\") pod \"machine-config-server-jth4m\" (UID: \"f5d8865d-92b4-4a08-a6f6-f10639d9b709\") " pod="openshift-machine-config-operator/machine-config-server-jth4m"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952503 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2ef0e062-1746-4c5e-9e1f-8f75282a9404-srv-cert\") pod \"olm-operator-6b444d44fb-m47cx\" (UID: \"2ef0e062-1746-4c5e-9e1f-8f75282a9404\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952552 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34cffbee-ca52-499c-a1cb-4f127b9d352f-config\") pod \"service-ca-operator-777779d784-k8sg2\" (UID: \"34cffbee-ca52-499c-a1cb-4f127b9d352f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952588 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-socket-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952613 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-plugins-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952658 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfw6c\" (UniqueName: \"kubernetes.io/projected/d67eea0c-6059-4e76-8bdf-0fb3c25e2717-kube-api-access-vfw6c\") pod \"control-plane-machine-set-operator-78cbb6b69f-79hxc\" (UID: \"d67eea0c-6059-4e76-8bdf-0fb3c25e2717\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bv6w6\" (UID: \"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952717 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23cfc2c7-bd77-4361-b718-c0b6f13d8475-etcd-service-ca\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952764 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d47bfc68-8f0f-4717-be7a-fccf98897cda-service-ca-bundle\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952809 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdmqn\" (UniqueName: \"kubernetes.io/projected/7ad046d9-2ae3-470e-9cde-bbed21290815-kube-api-access-mdmqn\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952849 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8310f440-1e69-4e39-8a1d-2f02c1fb51c0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-h6mnc\" (UID: \"8310f440-1e69-4e39-8a1d-2f02c1fb51c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952889 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b24fb26c-0cf1-4094-8c3d-d274fee59a7a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8sg2r\" (UID: \"b24fb26c-0cf1-4094-8c3d-d274fee59a7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952927 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c086d0f-4dda-4123-b2e9-fe95519e76ff-cert\") pod \"ingress-canary-x7hdg\" (UID: \"7c086d0f-4dda-4123-b2e9-fe95519e76ff\") " pod="openshift-ingress-canary/ingress-canary-x7hdg"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.952968 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65mvx\" (UniqueName: \"kubernetes.io/projected/f5d8865d-92b4-4a08-a6f6-f10639d9b709-kube-api-access-65mvx\") pod \"machine-config-server-jth4m\" (UID: \"f5d8865d-92b4-4a08-a6f6-f10639d9b709\") " pod="openshift-machine-config-operator/machine-config-server-jth4m"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953011 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e55dbef-89d4-4f2f-a4db-714290b34dad-tmpfs\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953050 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czdht\" (UniqueName: \"kubernetes.io/projected/23cfc2c7-bd77-4361-b718-c0b6f13d8475-kube-api-access-czdht\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953119 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7422e6b-7aee-4c6f-95e2-233d4040f239-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pjqvk\" (UID: \"f7422e6b-7aee-4c6f-95e2-233d4040f239\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953052 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-plugins-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953159 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhr6f\" (UniqueName: \"kubernetes.io/projected/7c086d0f-4dda-4123-b2e9-fe95519e76ff-kube-api-access-nhr6f\") pod \"ingress-canary-x7hdg\" (UID: \"7c086d0f-4dda-4123-b2e9-fe95519e76ff\") " pod="openshift-ingress-canary/ingress-canary-x7hdg"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953190 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951c61d4-aa85-4db8-8ddf-f7889e8f85ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7jsmd\" (UID: \"951c61d4-aa85-4db8-8ddf-f7889e8f85ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953283 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvmg\" (UniqueName: \"kubernetes.io/projected/356a8f86-9fc9-4c30-accb-4866e29407fd-kube-api-access-8zvmg\") pod \"cluster-samples-operator-665b6dd947-grf28\" (UID: \"356a8f86-9fc9-4c30-accb-4866e29407fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953334 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lghrw\" (UniqueName: \"kubernetes.io/projected/34cffbee-ca52-499c-a1cb-4f127b9d352f-kube-api-access-lghrw\") pod \"service-ca-operator-777779d784-k8sg2\" (UID: \"34cffbee-ca52-499c-a1cb-4f127b9d352f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953283 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-socket-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953373 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23cfc2c7-bd77-4361-b718-c0b6f13d8475-serving-cert\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953414 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sfp24\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfp24"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953459 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8310f440-1e69-4e39-8a1d-2f02c1fb51c0-proxy-tls\") pod \"machine-config-controller-84d6567774-h6mnc\" (UID: \"8310f440-1e69-4e39-8a1d-2f02c1fb51c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953505 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ad046d9-2ae3-470e-9cde-bbed21290815-proxy-tls\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953540 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e55dbef-89d4-4f2f-a4db-714290b34dad-apiservice-cert\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953589 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953633 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ad046d9-2ae3-470e-9cde-bbed21290815-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953673 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzccx\" (UniqueName: \"kubernetes.io/projected/cb574028-4d71-408f-9fe7-f2497636fffd-kube-api-access-jzccx\") pod \"migrator-59844c95c7-h6lzz\" (UID: \"cb574028-4d71-408f-9fe7-f2497636fffd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953725 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/23cfc2c7-bd77-4361-b718-c0b6f13d8475-etcd-service-ca\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953737 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/4e55dbef-89d4-4f2f-a4db-714290b34dad-tmpfs\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953728 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dc128cd-807d-4a60-9629-be4dbf53cf9b-trusted-ca\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953821 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/356a8f86-9fc9-4c30-accb-4866e29407fd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-grf28\" (UID: \"356a8f86-9fc9-4c30-accb-4866e29407fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953865 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951c61d4-aa85-4db8-8ddf-f7889e8f85ba-config\") pod \"kube-controller-manager-operator-78b949d7b-7jsmd\" (UID: \"951c61d4-aa85-4db8-8ddf-f7889e8f85ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953905 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e3750c-2800-4e2b-90b5-7fb1c06af232-config-volume\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.953944 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rgjh\" (UniqueName: \"kubernetes.io/projected/ac4fd89a-3d15-4886-ad35-318136b7a519-kube-api-access-6rgjh\") pod \"collect-profiles-29524320-7s7mv\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954005 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b73e42-e2e2-4e2a-8df3-879f78fba75f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qb2fs\" (UID: \"36b73e42-e2e2-4e2a-8df3-879f78fba75f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954045 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktbjv\" (UniqueName: \"kubernetes.io/projected/851d551b-ead7-4bd9-8d0a-88227edc9aad-kube-api-access-ktbjv\") pod \"catalog-operator-68c6474976-8w9hf\" (UID: \"851d551b-ead7-4bd9-8d0a-88227edc9aad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954078 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-registration-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954121 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/851d551b-ead7-4bd9-8d0a-88227edc9aad-profile-collector-cert\") pod \"catalog-operator-68c6474976-8w9hf\" (UID: \"851d551b-ead7-4bd9-8d0a-88227edc9aad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954152 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktwhz\" (UniqueName: \"kubernetes.io/projected/4e55dbef-89d4-4f2f-a4db-714290b34dad-kube-api-access-ktwhz\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954188 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e55dbef-89d4-4f2f-a4db-714290b34dad-webhook-cert\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954256 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5d8865d-92b4-4a08-a6f6-f10639d9b709-certs\") pod \"machine-config-server-jth4m\" (UID: \"f5d8865d-92b4-4a08-a6f6-f10639d9b709\") " pod="openshift-machine-config-operator/machine-config-server-jth4m"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954308 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/951c61d4-aa85-4db8-8ddf-f7889e8f85ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7jsmd\" (UID: \"951c61d4-aa85-4db8-8ddf-f7889e8f85ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd"
Feb 19 00:08:34 crc kubenswrapper[4889]: E0219 00:08:34.954328 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.454299278 +0000 UTC m=+141.418964349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954386 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d47bfc68-8f0f-4717-be7a-fccf98897cda-metrics-certs\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954520 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23cfc2c7-bd77-4361-b718-c0b6f13d8475-etcd-client\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954581 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e039dfef-e576-4bf7-8d87-9569ae038395-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mp2px\" (UID: \"e039dfef-e576-4bf7-8d87-9569ae038395\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px"
Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954640 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-mountpoint-dir\") pod \"csi-hostpathplugin-x66jj\" (UID:
\"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954691 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d47bfc68-8f0f-4717-be7a-fccf98897cda-stats-auth\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954744 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac4fd89a-3d15-4886-ad35-318136b7a519-secret-volume\") pod \"collect-profiles-29524320-7s7mv\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954762 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/951c61d4-aa85-4db8-8ddf-f7889e8f85ba-config\") pod \"kube-controller-manager-operator-78b949d7b-7jsmd\" (UID: \"951c61d4-aa85-4db8-8ddf-f7889e8f85ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954797 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7422e6b-7aee-4c6f-95e2-233d4040f239-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pjqvk\" (UID: \"f7422e6b-7aee-4c6f-95e2-233d4040f239\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954855 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-registration-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954852 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7dc128cd-807d-4a60-9629-be4dbf53cf9b-metrics-tls\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954929 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8310f440-1e69-4e39-8a1d-2f02c1fb51c0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-h6mnc\" (UID: \"8310f440-1e69-4e39-8a1d-2f02c1fb51c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.954976 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm4hp\" (UniqueName: \"kubernetes.io/projected/7dc128cd-807d-4a60-9629-be4dbf53cf9b-kube-api-access-bm4hp\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955004 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8e3750c-2800-4e2b-90b5-7fb1c06af232-metrics-tls\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955030 4889 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d67eea0c-6059-4e76-8bdf-0fb3c25e2717-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-79hxc\" (UID: \"d67eea0c-6059-4e76-8bdf-0fb3c25e2717\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955078 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7ad046d9-2ae3-470e-9cde-bbed21290815-images\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955113 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23cfc2c7-bd77-4361-b718-c0b6f13d8475-config\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955146 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sfp24\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955169 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c-config\") pod \"kube-apiserver-operator-766d6c64bb-bv6w6\" (UID: \"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955209 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d-signing-cabundle\") pod \"service-ca-9c57cc56f-mkqxj\" (UID: \"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955252 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b73e42-e2e2-4e2a-8df3-879f78fba75f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-qb2fs\" (UID: \"36b73e42-e2e2-4e2a-8df3-879f78fba75f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955260 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7dc128cd-807d-4a60-9629-be4dbf53cf9b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955291 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2665bd86-15fa-4593-9d82-1e3753db401d-metrics-tls\") pod \"dns-operator-744455d44c-bdl2b\" (UID: \"2665bd86-15fa-4593-9d82-1e3753db401d\") " pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955320 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hbfp\" (UniqueName: 
\"kubernetes.io/projected/b24fb26c-0cf1-4094-8c3d-d274fee59a7a-kube-api-access-9hbfp\") pod \"package-server-manager-789f6589d5-8sg2r\" (UID: \"b24fb26c-0cf1-4094-8c3d-d274fee59a7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955350 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23cfc2c7-bd77-4361-b718-c0b6f13d8475-etcd-ca\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955410 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bv6w6\" (UID: \"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955438 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac4fd89a-3d15-4886-ad35-318136b7a519-config-volume\") pod \"collect-profiles-29524320-7s7mv\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955466 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7422e6b-7aee-4c6f-95e2-233d4040f239-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pjqvk\" (UID: \"f7422e6b-7aee-4c6f-95e2-233d4040f239\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 
00:08:34.955469 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-mountpoint-dir\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.955496 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4xv\" (UniqueName: \"kubernetes.io/projected/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-kube-api-access-ng4xv\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.956686 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/23cfc2c7-bd77-4361-b718-c0b6f13d8475-etcd-ca\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.956919 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ad046d9-2ae3-470e-9cde-bbed21290815-auth-proxy-config\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.957472 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d47bfc68-8f0f-4717-be7a-fccf98897cda-service-ca-bundle\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc 
kubenswrapper[4889]: I0219 00:08:34.957490 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23cfc2c7-bd77-4361-b718-c0b6f13d8475-config\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.957686 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7422e6b-7aee-4c6f-95e2-233d4040f239-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pjqvk\" (UID: \"f7422e6b-7aee-4c6f-95e2-233d4040f239\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.958982 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7dc128cd-807d-4a60-9629-be4dbf53cf9b-trusted-ca\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.959576 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23cfc2c7-bd77-4361-b718-c0b6f13d8475-serving-cert\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.959674 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d47bfc68-8f0f-4717-be7a-fccf98897cda-default-certificate\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 
00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.959903 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23cfc2c7-bd77-4361-b718-c0b6f13d8475-etcd-client\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.959930 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b73e42-e2e2-4e2a-8df3-879f78fba75f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-qb2fs\" (UID: \"36b73e42-e2e2-4e2a-8df3-879f78fba75f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.960120 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d47bfc68-8f0f-4717-be7a-fccf98897cda-stats-auth\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.960834 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7422e6b-7aee-4c6f-95e2-233d4040f239-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pjqvk\" (UID: \"f7422e6b-7aee-4c6f-95e2-233d4040f239\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.962420 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/951c61d4-aa85-4db8-8ddf-f7889e8f85ba-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7jsmd\" (UID: 
\"951c61d4-aa85-4db8-8ddf-f7889e8f85ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.962428 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7dc128cd-807d-4a60-9629-be4dbf53cf9b-metrics-tls\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.962753 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d47bfc68-8f0f-4717-be7a-fccf98897cda-metrics-certs\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.966005 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 00:08:34 crc kubenswrapper[4889]: I0219 00:08:34.986264 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.000899 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e55dbef-89d4-4f2f-a4db-714290b34dad-apiservice-cert\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.001146 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e55dbef-89d4-4f2f-a4db-714290b34dad-webhook-cert\") pod 
\"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.007579 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.026617 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.047300 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.057874 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.058129 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.558099341 +0000 UTC m=+141.522764362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.058981 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.059450 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.559434551 +0000 UTC m=+141.524099582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.066680 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.087356 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.099504 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f5d8865d-92b4-4a08-a6f6-f10639d9b709-certs\") pod \"machine-config-server-jth4m\" (UID: \"f5d8865d-92b4-4a08-a6f6-f10639d9b709\") " pod="openshift-machine-config-operator/machine-config-server-jth4m" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.106712 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.125816 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.138341 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f5d8865d-92b4-4a08-a6f6-f10639d9b709-node-bootstrap-token\") pod \"machine-config-server-jth4m\" (UID: \"f5d8865d-92b4-4a08-a6f6-f10639d9b709\") " 
pod="openshift-machine-config-operator/machine-config-server-jth4m" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.146970 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.160397 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.160593 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.660555332 +0000 UTC m=+141.625220373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.160687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.161568 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.661552113 +0000 UTC m=+141.626217114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.181381 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.186050 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.188136 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d-signing-cabundle\") pod \"service-ca-9c57cc56f-mkqxj\" (UID: \"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.206003 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.218206 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d-signing-key\") pod \"service-ca-9c57cc56f-mkqxj\" (UID: \"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.225708 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.247619 4889 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.257804 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c-config\") pod \"kube-apiserver-operator-766d6c64bb-bv6w6\" (UID: \"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.262790 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.262911 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.76287933 +0000 UTC m=+141.727544341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.264504 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.264943 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.764904882 +0000 UTC m=+141.729569883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.266890 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.287202 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.304077 4889 request.go:700] Waited for 1.019292993s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.306044 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.311319 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bv6w6\" (UID: \"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.327652 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.340992 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e039dfef-e576-4bf7-8d87-9569ae038395-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mp2px\" (UID: \"e039dfef-e576-4bf7-8d87-9569ae038395\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.346919 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.365481 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.365819 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.865763384 +0000 UTC m=+141.830428385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.366017 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.366265 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.366847 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.866831098 +0000 UTC m=+141.831496099 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.386704 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.406756 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.417966 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/356a8f86-9fc9-4c30-accb-4866e29407fd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-grf28\" (UID: \"356a8f86-9fc9-4c30-accb-4866e29407fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.427138 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.447323 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.457293 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/851d551b-ead7-4bd9-8d0a-88227edc9aad-srv-cert\") pod \"catalog-operator-68c6474976-8w9hf\" (UID: \"851d551b-ead7-4bd9-8d0a-88227edc9aad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.466856 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.467453 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.467824 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.967787793 +0000 UTC m=+141.932452844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.469511 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.469852 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:35.969840816 +0000 UTC m=+141.934505817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.477215 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2ef0e062-1746-4c5e-9e1f-8f75282a9404-profile-collector-cert\") pod \"olm-operator-6b444d44fb-m47cx\" (UID: \"2ef0e062-1746-4c5e-9e1f-8f75282a9404\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.480760 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/851d551b-ead7-4bd9-8d0a-88227edc9aad-profile-collector-cert\") pod \"catalog-operator-68c6474976-8w9hf\" (UID: \"851d551b-ead7-4bd9-8d0a-88227edc9aad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.480798 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac4fd89a-3d15-4886-ad35-318136b7a519-secret-volume\") pod \"collect-profiles-29524320-7s7mv\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.493536 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.501485 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8310f440-1e69-4e39-8a1d-2f02c1fb51c0-proxy-tls\") pod \"machine-config-controller-84d6567774-h6mnc\" (UID: \"8310f440-1e69-4e39-8a1d-2f02c1fb51c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.507598 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.525763 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.539134 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b24fb26c-0cf1-4094-8c3d-d274fee59a7a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-8sg2r\" (UID: \"b24fb26c-0cf1-4094-8c3d-d274fee59a7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.547204 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.566787 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.570881 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.571126 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.071088311 +0000 UTC m=+142.035753302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.571646 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.572156 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.072144833 +0000 UTC m=+142.036810024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.587193 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.600082 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2665bd86-15fa-4593-9d82-1e3753db401d-metrics-tls\") pod \"dns-operator-744455d44c-bdl2b\" (UID: \"2665bd86-15fa-4593-9d82-1e3753db401d\") " pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.606074 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.626923 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.646925 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.656393 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2ef0e062-1746-4c5e-9e1f-8f75282a9404-srv-cert\") pod \"olm-operator-6b444d44fb-m47cx\" (UID: \"2ef0e062-1746-4c5e-9e1f-8f75282a9404\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.666895 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.672262 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.672421 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.172396966 +0000 UTC m=+142.137061967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.672771 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.673341 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.173309404 +0000 UTC m=+142.137974395 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.684900 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d67eea0c-6059-4e76-8bdf-0fb3c25e2717-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-79hxc\" (UID: \"d67eea0c-6059-4e76-8bdf-0fb3c25e2717\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.686778 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.697311 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34cffbee-ca52-499c-a1cb-4f127b9d352f-serving-cert\") pod \"service-ca-operator-777779d784-k8sg2\" (UID: \"34cffbee-ca52-499c-a1cb-4f127b9d352f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.707360 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.727310 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.747036 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.754044 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34cffbee-ca52-499c-a1cb-4f127b9d352f-config\") pod \"service-ca-operator-777779d784-k8sg2\" (UID: \"34cffbee-ca52-499c-a1cb-4f127b9d352f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.767315 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.774505 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.774767 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.274735194 +0000 UTC m=+142.239400205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.776779 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.777306 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.277272993 +0000 UTC m=+142.241937994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.786634 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.807049 4889 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.828196 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.847501 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.866420 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.868285 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac4fd89a-3d15-4886-ad35-318136b7a519-config-volume\") pod \"collect-profiles-29524320-7s7mv\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.878375 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.878578 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.378548208 +0000 UTC m=+142.343213219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.879065 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.879628 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.37961305 +0000 UTC m=+142.344278051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.886410 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.899760 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sfp24\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfp24"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.933039 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.942702 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.946061 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.947874 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sfp24\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfp24"
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.953785 4889 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.953887 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c086d0f-4dda-4123-b2e9-fe95519e76ff-cert podName:7c086d0f-4dda-4123-b2e9-fe95519e76ff nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.453862522 +0000 UTC m=+142.418527503 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c086d0f-4dda-4123-b2e9-fe95519e76ff-cert") pod "ingress-canary-x7hdg" (UID: "7c086d0f-4dda-4123-b2e9-fe95519e76ff") : failed to sync secret cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.954275 4889 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.954316 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ad046d9-2ae3-470e-9cde-bbed21290815-proxy-tls podName:7ad046d9-2ae3-470e-9cde-bbed21290815 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.454306856 +0000 UTC m=+142.418971847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/7ad046d9-2ae3-470e-9cde-bbed21290815-proxy-tls") pod "machine-config-operator-74547568cd-rvjx7" (UID: "7ad046d9-2ae3-470e-9cde-bbed21290815") : failed to sync secret cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.955067 4889 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.955307 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b8e3750c-2800-4e2b-90b5-7fb1c06af232-config-volume podName:b8e3750c-2800-4e2b-90b5-7fb1c06af232 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.455246564 +0000 UTC m=+142.419911575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b8e3750c-2800-4e2b-90b5-7fb1c06af232-config-volume") pod "dns-default-9sgpt" (UID: "b8e3750c-2800-4e2b-90b5-7fb1c06af232") : failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.957157 4889 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.957254 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b8e3750c-2800-4e2b-90b5-7fb1c06af232-metrics-tls podName:b8e3750c-2800-4e2b-90b5-7fb1c06af232 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.457244356 +0000 UTC m=+142.421909347 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b8e3750c-2800-4e2b-90b5-7fb1c06af232-metrics-tls") pod "dns-default-9sgpt" (UID: "b8e3750c-2800-4e2b-90b5-7fb1c06af232") : failed to sync secret cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.957347 4889 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.957508 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7ad046d9-2ae3-470e-9cde-bbed21290815-images podName:7ad046d9-2ae3-470e-9cde-bbed21290815 nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.457468673 +0000 UTC m=+142.422133794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/7ad046d9-2ae3-470e-9cde-bbed21290815-images") pod "machine-config-operator-74547568cd-rvjx7" (UID: "7ad046d9-2ae3-470e-9cde-bbed21290815") : failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.966129 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.980418 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.980641 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.480610927 +0000 UTC m=+142.445275918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.981092 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:35 crc kubenswrapper[4889]: E0219 00:08:35.981709 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.481696881 +0000 UTC m=+142.446361872 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:35 crc kubenswrapper[4889]: I0219 00:08:35.987341 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.006300 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.027326 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.065795 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4t6c\" (UniqueName: \"kubernetes.io/projected/0041b2eb-f313-4b10-9f6c-f4431ddc93f5-kube-api-access-p4t6c\") pod \"apiserver-76f77b778f-zmbvn\" (UID: \"0041b2eb-f313-4b10-9f6c-f4431ddc93f5\") " pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.082835 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.083017 4889 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.582986386 +0000 UTC m=+142.547651377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.083167 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.083542 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.583534463 +0000 UTC m=+142.548199454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.083609 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s24j2\" (UniqueName: \"kubernetes.io/projected/0ad552dd-0ce7-421e-a8ed-346c5e494d89-kube-api-access-s24j2\") pod \"machine-approver-56656f9798-d6xlk\" (UID: \"0ad552dd-0ce7-421e-a8ed-346c5e494d89\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.101967 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdw94\" (UniqueName: \"kubernetes.io/projected/c7c5e287-0d1b-4299-830f-9a32db4a5486-kube-api-access-bdw94\") pod \"apiserver-7bbb656c7d-x9kbz\" (UID: \"c7c5e287-0d1b-4299-830f-9a32db4a5486\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.123643 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqklg\" (UniqueName: \"kubernetes.io/projected/953c0d0a-1dec-4045-af86-0c6547b3a336-kube-api-access-zqklg\") pod \"machine-api-operator-5694c8668f-pmgn8\" (UID: \"953c0d0a-1dec-4045-af86-0c6547b3a336\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.142133 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcmpr\" (UniqueName: \"kubernetes.io/projected/316969ac-051e-4145-9536-e65bc7103089-kube-api-access-fcmpr\") pod 
\"route-controller-manager-6576b87f9c-w52zn\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.165985 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4b7z\" (UniqueName: \"kubernetes.io/projected/d772bc14-d22b-4af2-b640-f6a633e2b8b9-kube-api-access-g4b7z\") pod \"authentication-operator-69f744f599-55d27\" (UID: \"d772bc14-d22b-4af2-b640-f6a633e2b8b9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.171561 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.183792 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24b9h\" (UniqueName: \"kubernetes.io/projected/dff75125-78d5-4ee7-8a76-64087e781dd3-kube-api-access-24b9h\") pod \"controller-manager-879f6c89f-p8bs5\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.185053 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.185385 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:08:36.685349905 +0000 UTC m=+142.650014926 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.185706 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.186232 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.686181291 +0000 UTC m=+142.650846292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.207259 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ztz9\" (UniqueName: \"kubernetes.io/projected/d58c7e7a-3804-4b7a-bfb0-e79b50d92710-kube-api-access-8ztz9\") pod \"console-f9d7485db-ft7cw\" (UID: \"d58c7e7a-3804-4b7a-bfb0-e79b50d92710\") " pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.218913 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.227067 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.227262 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.229150 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnmlq\" (UniqueName: \"kubernetes.io/projected/4cfbfb11-99d4-42aa-bc2f-d4263718d5c9-kube-api-access-mnmlq\") pod \"openshift-controller-manager-operator-756b6f6bc6-ztf8h\" (UID: \"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.238058 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.247494 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.270142 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.288244 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.288791 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.288993 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:08:36.788953102 +0000 UTC m=+142.753618093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.289229 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.289493 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.289821 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.789803758 +0000 UTC m=+142.754468769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.307749 4889 request.go:700] Waited for 1.936443584s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.310708 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.317328 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.327141 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.346917 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.381270 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.393392 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.393931 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.893917301 +0000 UTC m=+142.858582292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.394246 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.407752 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv79r\" (UniqueName: \"kubernetes.io/projected/956660a2-d159-418c-933b-147711dd34b9-kube-api-access-kv79r\") pod \"openshift-config-operator-7777fb866f-2qk2n\" (UID: \"956660a2-d159-418c-933b-147711dd34b9\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.426581 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkdsn\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-kube-api-access-bkdsn\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.442153 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xzx6\" (UniqueName: \"kubernetes.io/projected/5a539b86-26c9-4270-aa81-6ca18af85223-kube-api-access-7xzx6\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.461076 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-55d27"] Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.462193 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr8jz\" (UniqueName: \"kubernetes.io/projected/cf74118f-1818-4680-98a6-a32fe3cc2725-kube-api-access-rr8jz\") pod \"console-operator-58897d9998-658hr\" (UID: \"cf74118f-1818-4680-98a6-a32fe3cc2725\") " 
pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.493912 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-zmbvn"] Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.496379 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ad046d9-2ae3-470e-9cde-bbed21290815-proxy-tls\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.496583 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.496646 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e3750c-2800-4e2b-90b5-7fb1c06af232-config-volume\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.496784 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8e3750c-2800-4e2b-90b5-7fb1c06af232-metrics-tls\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.496818 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/7ad046d9-2ae3-470e-9cde-bbed21290815-images\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.497066 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c086d0f-4dda-4123-b2e9-fe95519e76ff-cert\") pod \"ingress-canary-x7hdg\" (UID: \"7c086d0f-4dda-4123-b2e9-fe95519e76ff\") " pod="openshift-ingress-canary/ingress-canary-x7hdg" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.498028 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrrvt\" (UniqueName: \"kubernetes.io/projected/5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5-kube-api-access-mrrvt\") pod \"openshift-apiserver-operator-796bbdcf4f-mvjfx\" (UID: \"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.499082 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:36.999055295 +0000 UTC m=+142.963720286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.499635 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.502391 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e3750c-2800-4e2b-90b5-7fb1c06af232-config-volume\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.503592 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7ad046d9-2ae3-470e-9cde-bbed21290815-images\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.504005 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.509872 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5a539b86-26c9-4270-aa81-6ca18af85223-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-txkq5\" (UID: \"5a539b86-26c9-4270-aa81-6ca18af85223\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.517310 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ad046d9-2ae3-470e-9cde-bbed21290815-proxy-tls\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.512095 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b8e3750c-2800-4e2b-90b5-7fb1c06af232-metrics-tls\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.527887 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" event={"ID":"0ad552dd-0ce7-421e-a8ed-346c5e494d89","Type":"ContainerStarted","Data":"753032516a784c71d7f7d1bae2552fecc449dd976dccbda1facb1c74107c1e17"} Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.528664 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwwpp\" (UniqueName: \"kubernetes.io/projected/0bdda273-5648-4fa5-868c-48142d764012-kube-api-access-gwwpp\") pod \"image-pruner-29524320-vf894\" (UID: \"0bdda273-5648-4fa5-868c-48142d764012\") " 
pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.535476 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c086d0f-4dda-4123-b2e9-fe95519e76ff-cert\") pod \"ingress-canary-x7hdg\" (UID: \"7c086d0f-4dda-4123-b2e9-fe95519e76ff\") " pod="openshift-ingress-canary/ingress-canary-x7hdg" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.537252 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" event={"ID":"d772bc14-d22b-4af2-b640-f6a633e2b8b9","Type":"ContainerStarted","Data":"e65a2e0bc2534f4bf450a32e16fec19653db6c7b91a9d0a96e4896ebf826d95a"} Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.544086 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww7x6\" (UniqueName: \"kubernetes.io/projected/61f5ec49-57ce-4a4a-86b6-fed43a63c82e-kube-api-access-ww7x6\") pod \"downloads-7954f5f757-fp64s\" (UID: \"61f5ec49-57ce-4a4a-86b6-fed43a63c82e\") " pod="openshift-console/downloads-7954f5f757-fp64s" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.565751 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-bound-sa-token\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.588171 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65h2h\" (UniqueName: \"kubernetes.io/projected/f3d1db96-e34d-4c20-9556-edec9e27858c-kube-api-access-65h2h\") pod \"oauth-openshift-558db77b4-xdn9r\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:36 
crc kubenswrapper[4889]: I0219 00:08:36.600820 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.601370 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:37.101316571 +0000 UTC m=+143.065981562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.604885 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drrjd\" (UniqueName: \"kubernetes.io/projected/2665bd86-15fa-4593-9d82-1e3753db401d-kube-api-access-drrjd\") pod \"dns-operator-744455d44c-bdl2b\" (UID: \"2665bd86-15fa-4593-9d82-1e3753db401d\") " pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.610870 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.626287 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwxbs\" (UniqueName: \"kubernetes.io/projected/36b73e42-e2e2-4e2a-8df3-879f78fba75f-kube-api-access-xwxbs\") pod \"kube-storage-version-migrator-operator-b67b599dd-qb2fs\" (UID: \"36b73e42-e2e2-4e2a-8df3-879f78fba75f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.637090 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-ft7cw"] Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.644701 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.647668 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcbzq\" (UniqueName: \"kubernetes.io/projected/f7a4c945-2b4c-4b30-ad06-9158ce04018e-kube-api-access-hcbzq\") pod \"marketplace-operator-79b997595-sfp24\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.648054 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.670735 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvff\" (UniqueName: \"kubernetes.io/projected/e039dfef-e576-4bf7-8d87-9569ae038395-kube-api-access-xvvff\") pod \"multus-admission-controller-857f4d67dd-mp2px\" (UID: \"e039dfef-e576-4bf7-8d87-9569ae038395\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.699614 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljs6n\" (UniqueName: \"kubernetes.io/projected/b8e3750c-2800-4e2b-90b5-7fb1c06af232-kube-api-access-ljs6n\") pod \"dns-default-9sgpt\" (UID: \"b8e3750c-2800-4e2b-90b5-7fb1c06af232\") " pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.700468 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fp64s" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.702715 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.703007 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p8bs5"] Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.703313 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:08:37.203291087 +0000 UTC m=+143.167956078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.710123 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.715131 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.716828 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdwll\" (UniqueName: \"kubernetes.io/projected/2ef0e062-1746-4c5e-9e1f-8f75282a9404-kube-api-access-wdwll\") pod \"olm-operator-6b444d44fb-m47cx\" (UID: \"2ef0e062-1746-4c5e-9e1f-8f75282a9404\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.722842 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9btx\" (UniqueName: \"kubernetes.io/projected/d47bfc68-8f0f-4717-be7a-fccf98897cda-kube-api-access-n9btx\") pod \"router-default-5444994796-xf92p\" (UID: \"d47bfc68-8f0f-4717-be7a-fccf98897cda\") " pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.723229 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.728339 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.744785 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fqbd\" (UniqueName: \"kubernetes.io/projected/a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d-kube-api-access-7fqbd\") pod \"service-ca-9c57cc56f-mkqxj\" (UID: \"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d\") " pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.762345 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn"] Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.773132 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pmgn8"] Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.780138 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8297g\" (UniqueName: \"kubernetes.io/projected/8310f440-1e69-4e39-8a1d-2f02c1fb51c0-kube-api-access-8297g\") pod \"machine-config-controller-84d6567774-h6mnc\" (UID: \"8310f440-1e69-4e39-8a1d-2f02c1fb51c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.780789 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz"] Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.790703 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.796052 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bv6w6\" (UID: \"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.803437 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.804012 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:37.303995414 +0000 UTC m=+143.268660405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.819570 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.821739 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfw6c\" (UniqueName: \"kubernetes.io/projected/d67eea0c-6059-4e76-8bdf-0fb3c25e2717-kube-api-access-vfw6c\") pod \"control-plane-machine-set-operator-78cbb6b69f-79hxc\" (UID: \"d67eea0c-6059-4e76-8bdf-0fb3c25e2717\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.842690 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdmqn\" (UniqueName: \"kubernetes.io/projected/7ad046d9-2ae3-470e-9cde-bbed21290815-kube-api-access-mdmqn\") pod \"machine-config-operator-74547568cd-rvjx7\" (UID: \"7ad046d9-2ae3-470e-9cde-bbed21290815\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.844824 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:36 crc kubenswrapper[4889]: W0219 00:08:36.850370 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod953c0d0a_1dec_4045_af86_0c6547b3a336.slice/crio-7593d48aff9533f5965ba2d982ed67384e43efd4d5150bddcb10b92a9b037bf3 WatchSource:0}: Error finding container 7593d48aff9533f5965ba2d982ed67384e43efd4d5150bddcb10b92a9b037bf3: Status 404 returned error can't find the container with id 7593d48aff9533f5965ba2d982ed67384e43efd4d5150bddcb10b92a9b037bf3 Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.851499 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h"] Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.852022 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65mvx\" (UniqueName: \"kubernetes.io/projected/f5d8865d-92b4-4a08-a6f6-f10639d9b709-kube-api-access-65mvx\") pod \"machine-config-server-jth4m\" (UID: \"f5d8865d-92b4-4a08-a6f6-f10639d9b709\") " pod="openshift-machine-config-operator/machine-config-server-jth4m" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.867732 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czdht\" (UniqueName: \"kubernetes.io/projected/23cfc2c7-bd77-4361-b718-c0b6f13d8475-kube-api-access-czdht\") pod \"etcd-operator-b45778765-stmrq\" (UID: \"23cfc2c7-bd77-4361-b718-c0b6f13d8475\") " pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.877957 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jth4m" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.884289 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7422e6b-7aee-4c6f-95e2-233d4040f239-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-pjqvk\" (UID: \"f7422e6b-7aee-4c6f-95e2-233d4040f239\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.886335 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" Feb 19 00:08:36 crc kubenswrapper[4889]: W0219 00:08:36.887818 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc7c5e287_0d1b_4299_830f_9a32db4a5486.slice/crio-926b1e6dca420f8d00d10787c4b8f64ae212a07bd18969389579f29e801dc02b WatchSource:0}: Error finding container 926b1e6dca420f8d00d10787c4b8f64ae212a07bd18969389579f29e801dc02b: Status 404 returned error can't find the container with id 926b1e6dca420f8d00d10787c4b8f64ae212a07bd18969389579f29e801dc02b Feb 19 00:08:36 crc kubenswrapper[4889]: W0219 00:08:36.889658 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cfbfb11_99d4_42aa_bc2f_d4263718d5c9.slice/crio-25d6021d94e414ce23b4d952afe67ec8356f9aead4fafbf27208b370c2bb3fce WatchSource:0}: Error finding container 25d6021d94e414ce23b4d952afe67ec8356f9aead4fafbf27208b370c2bb3fce: Status 404 returned error can't find the container with id 25d6021d94e414ce23b4d952afe67ec8356f9aead4fafbf27208b370c2bb3fce Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.895739 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.902347 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.907400 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:36 crc kubenswrapper[4889]: E0219 00:08:36.907950 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:37.407935692 +0000 UTC m=+143.372600673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.910745 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhr6f\" (UniqueName: \"kubernetes.io/projected/7c086d0f-4dda-4123-b2e9-fe95519e76ff-kube-api-access-nhr6f\") pod \"ingress-canary-x7hdg\" (UID: \"7c086d0f-4dda-4123-b2e9-fe95519e76ff\") " pod="openshift-ingress-canary/ingress-canary-x7hdg" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.928195 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lghrw\" (UniqueName: \"kubernetes.io/projected/34cffbee-ca52-499c-a1cb-4f127b9d352f-kube-api-access-lghrw\") pod \"service-ca-operator-777779d784-k8sg2\" (UID: \"34cffbee-ca52-499c-a1cb-4f127b9d352f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.928697 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.947380 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvmg\" (UniqueName: \"kubernetes.io/projected/356a8f86-9fc9-4c30-accb-4866e29407fd-kube-api-access-8zvmg\") pod \"cluster-samples-operator-665b6dd947-grf28\" (UID: \"356a8f86-9fc9-4c30-accb-4866e29407fd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.952507 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.962545 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.967378 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/951c61d4-aa85-4db8-8ddf-f7889e8f85ba-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7jsmd\" (UID: \"951c61d4-aa85-4db8-8ddf-f7889e8f85ba\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.971153 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc" Feb 19 00:08:36 crc kubenswrapper[4889]: I0219 00:08:36.989845 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rgjh\" (UniqueName: \"kubernetes.io/projected/ac4fd89a-3d15-4886-ad35-318136b7a519-kube-api-access-6rgjh\") pod \"collect-profiles-29524320-7s7mv\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.001524 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.005701 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktbjv\" (UniqueName: \"kubernetes.io/projected/851d551b-ead7-4bd9-8d0a-88227edc9aad-kube-api-access-ktbjv\") pod \"catalog-operator-68c6474976-8w9hf\" (UID: \"851d551b-ead7-4bd9-8d0a-88227edc9aad\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.008390 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 00:08:37.008868 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:37.508831145 +0000 UTC m=+143.473496136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.015948 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.031754 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktwhz\" (UniqueName: \"kubernetes.io/projected/4e55dbef-89d4-4f2f-a4db-714290b34dad-kube-api-access-ktwhz\") pod \"packageserver-d55dfcdfc-86j7r\" (UID: \"4e55dbef-89d4-4f2f-a4db-714290b34dad\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.031825 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x7hdg" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.051648 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-658hr"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.059169 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4xv\" (UniqueName: \"kubernetes.io/projected/f8ed1e40-fb4c-4a2c-a713-d42036ee7138-kube-api-access-ng4xv\") pod \"csi-hostpathplugin-x66jj\" (UID: \"f8ed1e40-fb4c-4a2c-a713-d42036ee7138\") " pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.070094 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7dc128cd-807d-4a60-9629-be4dbf53cf9b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.088939 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hbfp\" (UniqueName: \"kubernetes.io/projected/b24fb26c-0cf1-4094-8c3d-d274fee59a7a-kube-api-access-9hbfp\") pod \"package-server-manager-789f6589d5-8sg2r\" (UID: \"b24fb26c-0cf1-4094-8c3d-d274fee59a7a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.093265 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bdl2b"] Feb 19 00:08:37 crc kubenswrapper[4889]: W0219 00:08:37.097609 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd47bfc68_8f0f_4717_be7a_fccf98897cda.slice/crio-dc0b6b08ec067e3a0669029ee686eb68bc3e50a7eb1d83790b3939dae8cf8780 
WatchSource:0}: Error finding container dc0b6b08ec067e3a0669029ee686eb68bc3e50a7eb1d83790b3939dae8cf8780: Status 404 returned error can't find the container with id dc0b6b08ec067e3a0669029ee686eb68bc3e50a7eb1d83790b3939dae8cf8780 Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.107771 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm4hp\" (UniqueName: \"kubernetes.io/projected/7dc128cd-807d-4a60-9629-be4dbf53cf9b-kube-api-access-bm4hp\") pod \"ingress-operator-5b745b69d9-wzzvn\" (UID: \"7dc128cd-807d-4a60-9629-be4dbf53cf9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.111174 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 00:08:37.111654 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:37.611634847 +0000 UTC m=+143.576299838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.124560 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.124621 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzccx\" (UniqueName: \"kubernetes.io/projected/cb574028-4d71-408f-9fe7-f2497636fffd-kube-api-access-jzccx\") pod \"migrator-59844c95c7-h6lzz\" (UID: \"cb574028-4d71-408f-9fe7-f2497636fffd\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.130834 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.138438 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.139985 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.162890 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.162947 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.169372 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.186649 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.212555 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 00:08:37.213499 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:37.71345262 +0000 UTC m=+143.678117621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.215638 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.221857 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.241616 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.284789 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fp64s"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.292479 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x66jj" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.316472 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 00:08:37.316945 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:37.816928312 +0000 UTC m=+143.781593303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.345440 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xdn9r"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.410799 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.417245 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 00:08:37.417696 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:37.917675441 +0000 UTC m=+143.882340432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.421525 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9sgpt"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.426709 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29524320-vf894"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.464324 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mp2px"] Feb 19 00:08:37 crc kubenswrapper[4889]: W0219 00:08:37.503111 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode039dfef_e576_4bf7_8d87_9569ae038395.slice/crio-6b5e0c1679eb2fd3ae3d71fddcd255dfbf3d3b75ad930cf8d8e282632114d14e WatchSource:0}: Error finding container 6b5e0c1679eb2fd3ae3d71fddcd255dfbf3d3b75ad930cf8d8e282632114d14e: Status 404 returned 
error can't find the container with id 6b5e0c1679eb2fd3ae3d71fddcd255dfbf3d3b75ad930cf8d8e282632114d14e Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.515584 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mkqxj"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.522390 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 00:08:37.523039 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:38.023019601 +0000 UTC m=+143.987684592 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.526977 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfp24"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.576848 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jth4m" event={"ID":"f5d8865d-92b4-4a08-a6f6-f10639d9b709","Type":"ContainerStarted","Data":"0d4f574516f1c93f3ee7eaac9825b640859858693c27f8d9712bdd3cfc3d383d"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.581428 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" event={"ID":"c7c5e287-0d1b-4299-830f-9a32db4a5486","Type":"ContainerStarted","Data":"926b1e6dca420f8d00d10787c4b8f64ae212a07bd18969389579f29e801dc02b"} Feb 19 00:08:37 crc kubenswrapper[4889]: W0219 00:08:37.581596 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a539b86_26c9_4270_aa81_6ca18af85223.slice/crio-ddb6e168200884a27174ba9afdc491479e386523ac651ca026f8ed257ef23ad2 WatchSource:0}: Error finding container ddb6e168200884a27174ba9afdc491479e386523ac651ca026f8ed257ef23ad2: Status 404 returned error can't find the container with id ddb6e168200884a27174ba9afdc491479e386523ac651ca026f8ed257ef23ad2 Feb 19 00:08:37 crc kubenswrapper[4889]: W0219 00:08:37.582769 4889 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bdda273_5648_4fa5_868c_48142d764012.slice/crio-57543aa6d76858e3278ccfaaef87337535a02db3197525a9a48e71b3276f3f0a WatchSource:0}: Error finding container 57543aa6d76858e3278ccfaaef87337535a02db3197525a9a48e71b3276f3f0a: Status 404 returned error can't find the container with id 57543aa6d76858e3278ccfaaef87337535a02db3197525a9a48e71b3276f3f0a Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.583546 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.587818 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" event={"ID":"0ad552dd-0ce7-421e-a8ed-346c5e494d89","Type":"ContainerStarted","Data":"8a980f95a04c3c0bc5acb8e21add2ae092f860142450e167061718895b54f197"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.597696 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" event={"ID":"f3d1db96-e34d-4c20-9556-edec9e27858c","Type":"ContainerStarted","Data":"77f619253b54c4214dae9fb45937fea91bf72595805ea565f92cd5dab4bda0cb"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.623485 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 00:08:37.623831 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:08:38.123814522 +0000 UTC m=+144.088479513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.633463 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px" event={"ID":"e039dfef-e576-4bf7-8d87-9569ae038395","Type":"ContainerStarted","Data":"6b5e0c1679eb2fd3ae3d71fddcd255dfbf3d3b75ad930cf8d8e282632114d14e"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.635766 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" event={"ID":"956660a2-d159-418c-933b-147711dd34b9","Type":"ContainerStarted","Data":"827c0a346b36dd099b9aed482b577338eec03d8978d28896c874354ba5cf7b5e"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.650858 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.656598 4889 generic.go:334] "Generic (PLEG): container finished" podID="0041b2eb-f313-4b10-9f6c-f4431ddc93f5" containerID="b71c6eb0788621c7833ebbf8668c3395a5368ad245d639d89e49a9509caeb820" exitCode=0 Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.656668 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" 
event={"ID":"0041b2eb-f313-4b10-9f6c-f4431ddc93f5","Type":"ContainerDied","Data":"b71c6eb0788621c7833ebbf8668c3395a5368ad245d639d89e49a9509caeb820"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.656693 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" event={"ID":"0041b2eb-f313-4b10-9f6c-f4431ddc93f5","Type":"ContainerStarted","Data":"84997da5f6c5b4b88ba0cb963deccbb22161361773fc4218e5a074edb8a427d1"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.662082 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" event={"ID":"d772bc14-d22b-4af2-b640-f6a633e2b8b9","Type":"ContainerStarted","Data":"6efd6e994aa098c3518688b9e8cdf8ce4ea6ce1e4052b165d9c5a99ad89774cd"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.663420 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xf92p" event={"ID":"d47bfc68-8f0f-4717-be7a-fccf98897cda","Type":"ContainerStarted","Data":"dc0b6b08ec067e3a0669029ee686eb68bc3e50a7eb1d83790b3939dae8cf8780"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.665212 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" event={"ID":"953c0d0a-1dec-4045-af86-0c6547b3a336","Type":"ContainerStarted","Data":"7593d48aff9533f5965ba2d982ed67384e43efd4d5150bddcb10b92a9b037bf3"} Feb 19 00:08:37 crc kubenswrapper[4889]: W0219 00:08:37.671930 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8310f440_1e69_4e39_8a1d_2f02c1fb51c0.slice/crio-99353fa3b580d1e86854ca45e83701cde80929474e8aeec61c1ac10c4afd54c1 WatchSource:0}: Error finding container 99353fa3b580d1e86854ca45e83701cde80929474e8aeec61c1ac10c4afd54c1: Status 404 returned error can't find the container with id 
99353fa3b580d1e86854ca45e83701cde80929474e8aeec61c1ac10c4afd54c1 Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.672605 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" event={"ID":"dff75125-78d5-4ee7-8a76-64087e781dd3","Type":"ContainerStarted","Data":"7926a52fa2d460f625d34d51af449d21237300b7185010c7fd73f70d9c5ce4c1"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.672676 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" event={"ID":"dff75125-78d5-4ee7-8a76-64087e781dd3","Type":"ContainerStarted","Data":"00e79541cbe82d0d789966c8bbb7ec3aba8548c9c9b9ecd325ddd9b496deeba3"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.672899 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:37 crc kubenswrapper[4889]: W0219 00:08:37.678532 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8e3750c_2800_4e2b_90b5_7fb1c06af232.slice/crio-aa3b49deefd7e88b4c17d767eeb6e954a36399ecc586ba6fba591f70c9ef3d19 WatchSource:0}: Error finding container aa3b49deefd7e88b4c17d767eeb6e954a36399ecc586ba6fba591f70c9ef3d19: Status 404 returned error can't find the container with id aa3b49deefd7e88b4c17d767eeb6e954a36399ecc586ba6fba591f70c9ef3d19 Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.688343 4889 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-p8bs5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.688406 4889 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" podUID="dff75125-78d5-4ee7-8a76-64087e781dd3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.688716 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-658hr" event={"ID":"cf74118f-1818-4680-98a6-a32fe3cc2725","Type":"ContainerStarted","Data":"11031630dd0cc90a075885b1644cbfdbb5349aed21646b0a6dfba66a0b6d2bdc"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.705791 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fp64s" event={"ID":"61f5ec49-57ce-4a4a-86b6-fed43a63c82e","Type":"ContainerStarted","Data":"4bf54dba512953a9ff1a47ce3338ad88f94927e168874f94a585f743fb533ae1"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.710848 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" event={"ID":"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9","Type":"ContainerStarted","Data":"25d6021d94e414ce23b4d952afe67ec8356f9aead4fafbf27208b370c2bb3fce"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.712091 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" event={"ID":"2665bd86-15fa-4593-9d82-1e3753db401d","Type":"ContainerStarted","Data":"df076d71aca401522ad4281d98306a3d8ff849e1c2ec8edfec222eecb008139a"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.718787 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.721929 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" event={"ID":"316969ac-051e-4145-9536-e65bc7103089","Type":"ContainerStarted","Data":"2513a276bc7ba5b31b0a8e2fd70cbe32c52ef1e404225f3bb7c6e0ea309dd0c1"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.724378 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ft7cw" event={"ID":"d58c7e7a-3804-4b7a-bfb0-e79b50d92710","Type":"ContainerStarted","Data":"936e1cf15c72e9bd794a19f719c7fe3d9774e252826eee3c1d87f2c9f8cfbca6"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.724431 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-ft7cw" event={"ID":"d58c7e7a-3804-4b7a-bfb0-e79b50d92710","Type":"ContainerStarted","Data":"9a063e2592e54eaa53cfa3aab49160aec2623399e9052471479a31d79064eb39"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.725518 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 00:08:37.726913 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:38.226868203 +0000 UTC m=+144.191533284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.727772 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" event={"ID":"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5","Type":"ContainerStarted","Data":"a41e72c0e8bf24674bfe2a7563cb9f0ea45dfb956a2a41806867656c0bf2e9c8"} Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.781797 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.781860 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.827195 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 
00:08:37.831606 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:38.331581703 +0000 UTC m=+144.296246694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.858297 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2"] Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.928969 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:37 crc kubenswrapper[4889]: E0219 00:08:37.929343 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:38.429327479 +0000 UTC m=+144.393992470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:37 crc kubenswrapper[4889]: I0219 00:08:37.977007 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc"] Feb 19 00:08:38 crc kubenswrapper[4889]: W0219 00:08:38.004148 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34cffbee_ca52_499c_a1cb_4f127b9d352f.slice/crio-32a3944da192b874a7f2be6282188f0ae0ec759da360ba6ee8ca828e895b6d6e WatchSource:0}: Error finding container 32a3944da192b874a7f2be6282188f0ae0ec759da360ba6ee8ca828e895b6d6e: Status 404 returned error can't find the container with id 32a3944da192b874a7f2be6282188f0ae0ec759da360ba6ee8ca828e895b6d6e Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.030088 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.030645 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:38.530625525 +0000 UTC m=+144.495290506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.035904 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-55d27" podStartSLOduration=123.035877297 podStartE2EDuration="2m3.035877297s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:37.998858245 +0000 UTC m=+143.963523246" watchObservedRunningTime="2026-02-19 00:08:38.035877297 +0000 UTC m=+144.000542288" Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.078592 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv"] Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.094442 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx"] Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.133406 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.133921 4889 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:38.633902082 +0000 UTC m=+144.598567073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: W0219 00:08:38.133926 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd67eea0c_6059_4e76_8bdf_0fb3c25e2717.slice/crio-de69bf536268ae246bf91a4e0682fee49d8cc5f50c8411da1c0753645a2dfd2a WatchSource:0}: Error finding container de69bf536268ae246bf91a4e0682fee49d8cc5f50c8411da1c0753645a2dfd2a: Status 404 returned error can't find the container with id de69bf536268ae246bf91a4e0682fee49d8cc5f50c8411da1c0753645a2dfd2a Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.235236 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.236087 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:08:38.736051085 +0000 UTC m=+144.700716076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.311534 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x7hdg"] Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.337385 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.337694 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:38.83768266 +0000 UTC m=+144.802347651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.352666 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7"] Feb 19 00:08:38 crc kubenswrapper[4889]: W0219 00:08:38.402279 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c086d0f_4dda_4123_b2e9_fe95519e76ff.slice/crio-8b3450070919089b564b961cb138c2da211253828372189698c8242a90ed6057 WatchSource:0}: Error finding container 8b3450070919089b564b961cb138c2da211253828372189698c8242a90ed6057: Status 404 returned error can't find the container with id 8b3450070919089b564b961cb138c2da211253828372189698c8242a90ed6057 Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.408595 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" podStartSLOduration=122.408575798 podStartE2EDuration="2m2.408575798s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:38.375323502 +0000 UTC m=+144.339988493" watchObservedRunningTime="2026-02-19 00:08:38.408575798 +0000 UTC m=+144.373240789" Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.411105 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r"] Feb 19 00:08:38 
crc kubenswrapper[4889]: I0219 00:08:38.439132 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.440756 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:38.94071855 +0000 UTC m=+144.905383541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.445727 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x66jj"] Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.523840 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd"] Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.542983 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") 
" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.543891 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.043876803 +0000 UTC m=+145.008541794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.610249 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-stmrq"] Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.692966 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.696410 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.196378278 +0000 UTC m=+145.161043269 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.699417 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.702422 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.202404885 +0000 UTC m=+145.167069876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.727962 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-ft7cw" podStartSLOduration=123.727940333 podStartE2EDuration="2m3.727940333s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:38.72558237 +0000 UTC m=+144.690247351" watchObservedRunningTime="2026-02-19 00:08:38.727940333 +0000 UTC m=+144.692605324" Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.801287 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.801656 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.301519593 +0000 UTC m=+145.266184584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.801919 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.802290 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.302275526 +0000 UTC m=+145.266940517 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.818905 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" event={"ID":"316969ac-051e-4145-9536-e65bc7103089","Type":"ContainerStarted","Data":"ce4fc427c40c4bbfcfccd0edc84c21d02795838442886226b32392cf9bc144f4"} Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.819464 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.822369 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x7hdg" event={"ID":"7c086d0f-4dda-4123-b2e9-fe95519e76ff","Type":"ContainerStarted","Data":"8b3450070919089b564b961cb138c2da211253828372189698c8242a90ed6057"} Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.829928 4889 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w52zn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.829996 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" podUID="316969ac-051e-4145-9536-e65bc7103089" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.847844 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28"] Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.885814 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fp64s" event={"ID":"61f5ec49-57ce-4a4a-86b6-fed43a63c82e","Type":"ContainerStarted","Data":"86435c9d5ed005ac304fa6f316d9fa2faac2df357752e35bdfa3b66006e06fbf"} Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.887310 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fp64s" Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.904742 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:38 crc kubenswrapper[4889]: E0219 00:08:38.906073 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.406033928 +0000 UTC m=+145.370698909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.906437 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.906506 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.919434 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-vf894" event={"ID":"0bdda273-5648-4fa5-868c-48142d764012","Type":"ContainerStarted","Data":"57543aa6d76858e3278ccfaaef87337535a02db3197525a9a48e71b3276f3f0a"} Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.937112 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" event={"ID":"4e55dbef-89d4-4f2f-a4db-714290b34dad","Type":"ContainerStarted","Data":"afde329a6f9ef6cbb506afa90e49c2eebd7f41257250658c82eb7405baf7c194"} Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.943525 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" event={"ID":"951c61d4-aa85-4db8-8ddf-f7889e8f85ba","Type":"ContainerStarted","Data":"bebc35b08694a338b12f3964a8e7778f01b2c752468d20f4585ccabf568248e1"} Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.951012 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r"] Feb 19 00:08:38 crc kubenswrapper[4889]: I0219 00:08:38.976063 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn"] Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.007733 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.008507 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.50848781 +0000 UTC m=+145.473152801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.035615 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2" event={"ID":"34cffbee-ca52-499c-a1cb-4f127b9d352f","Type":"ContainerStarted","Data":"32a3944da192b874a7f2be6282188f0ae0ec759da360ba6ee8ca828e895b6d6e"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.044838 4889 generic.go:334] "Generic (PLEG): container finished" podID="c7c5e287-0d1b-4299-830f-9a32db4a5486" containerID="3c7b3c01d8955091681947ee0ae368973af38595c317494a1a1b040c24fa1d92" exitCode=0 Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.045330 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" event={"ID":"c7c5e287-0d1b-4299-830f-9a32db4a5486","Type":"ContainerDied","Data":"3c7b3c01d8955091681947ee0ae368973af38595c317494a1a1b040c24fa1d92"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.052979 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk"] Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.062740 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" event={"ID":"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c","Type":"ContainerStarted","Data":"bb8f306bc89dcb15b7679a18135005fe169ebc2fcc2e049fca0a07c6ddcb7dc2"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 
00:08:39.066781 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz"] Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.068363 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-658hr" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.072056 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fp64s" podStartSLOduration=124.07202807 podStartE2EDuration="2m4.07202807s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.060391041 +0000 UTC m=+145.025056032" watchObservedRunningTime="2026-02-19 00:08:39.07202807 +0000 UTC m=+145.036693081" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.074453 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" event={"ID":"953c0d0a-1dec-4045-af86-0c6547b3a336","Type":"ContainerStarted","Data":"c23bbaa32ff252cb5116b06c1e9bb9cabcfbbecb54015cb15ccd37fdcbf4e0e6"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.076519 4889 patch_prober.go:28] interesting pod/console-operator-58897d9998-658hr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.076579 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-658hr" podUID="cf74118f-1818-4680-98a6-a32fe3cc2725" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" 
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.078915 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" event={"ID":"f7a4c945-2b4c-4b30-ad06-9158ce04018e","Type":"ContainerStarted","Data":"8f76b004c90e6aea1851efb009a6f0aae167047d7f2d26d37f09702db4af622b"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.079675 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.080499 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" event={"ID":"7ad046d9-2ae3-470e-9cde-bbed21290815","Type":"ContainerStarted","Data":"150f60a6d1e5562454d3d0bff29f2abd63e8cb89a68c420178199fd6c7a144b4"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.081919 4889 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sfp24 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.081963 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" podUID="f7a4c945-2b4c-4b30-ad06-9158ce04018e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.082113 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" event={"ID":"8310f440-1e69-4e39-8a1d-2f02c1fb51c0","Type":"ContainerStarted","Data":"99353fa3b580d1e86854ca45e83701cde80929474e8aeec61c1ac10c4afd54c1"} Feb 19 
00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.091590 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" event={"ID":"4cfbfb11-99d4-42aa-bc2f-d4263718d5c9","Type":"ContainerStarted","Data":"ccb9e33b9b7bb41dcc057cca3a19b6b991ca970e8b0b355aefb8ebba5e9f0f71"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.104714 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf"] Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.123030 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.124576 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.62453834 +0000 UTC m=+145.589203341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.146002 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" event={"ID":"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d","Type":"ContainerStarted","Data":"2e7dca0c87698e3a043cfb6dae5b1120a7e09a8f5c92f67a831492bf5323118a"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.146076 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" event={"ID":"a5ad13a5-37cd-4dc9-8f49-247d1cad0b2d","Type":"ContainerStarted","Data":"4185260fd289165f9b91d2e24658c9241a67d25e5b13a1f300dabcc78ee0e541"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.172565 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" podStartSLOduration=123.172545502 podStartE2EDuration="2m3.172545502s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.171273063 +0000 UTC m=+145.135938074" watchObservedRunningTime="2026-02-19 00:08:39.172545502 +0000 UTC m=+145.137210493" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.177562 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" 
event={"ID":"36b73e42-e2e2-4e2a-8df3-879f78fba75f","Type":"ContainerStarted","Data":"f1e1e3648a939948c0a977fae4b3437758b474d75bd3284349dd716001c2d398"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.190952 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x66jj" event={"ID":"f8ed1e40-fb4c-4a2c-a713-d42036ee7138","Type":"ContainerStarted","Data":"591687cf650e904c3b855c0dbbbc35632e99c57f63e3970405f6f4c2f61d74f7"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.206838 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" event={"ID":"5a539b86-26c9-4270-aa81-6ca18af85223","Type":"ContainerStarted","Data":"00bbbdd3bedc3a5923a062d3add43a07634750e0c8b12156043757a24cf13e0e"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.206886 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" event={"ID":"5a539b86-26c9-4270-aa81-6ca18af85223","Type":"ContainerStarted","Data":"ddb6e168200884a27174ba9afdc491479e386523ac651ca026f8ed257ef23ad2"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.225012 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jth4m" event={"ID":"f5d8865d-92b4-4a08-a6f6-f10639d9b709","Type":"ContainerStarted","Data":"3a6489e171a8edf2bed957bf91edcd3a891159471a93cc5e0e32fcec5b9e9646"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.231847 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:39 crc 
kubenswrapper[4889]: I0219 00:08:39.233192 4889 csr.go:261] certificate signing request csr-5dldf is approved, waiting to be issued Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.235749 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.735734302 +0000 UTC m=+145.700399293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.237022 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29524320-vf894" podStartSLOduration=124.237005501 podStartE2EDuration="2m4.237005501s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.233397259 +0000 UTC m=+145.198062250" watchObservedRunningTime="2026-02-19 00:08:39.237005501 +0000 UTC m=+145.201670482" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.237336 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc" event={"ID":"d67eea0c-6059-4e76-8bdf-0fb3c25e2717","Type":"ContainerStarted","Data":"de69bf536268ae246bf91a4e0682fee49d8cc5f50c8411da1c0753645a2dfd2a"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.244483 4889 csr.go:257] certificate signing request 
csr-5dldf is issued Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.255701 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" event={"ID":"2ef0e062-1746-4c5e-9e1f-8f75282a9404","Type":"ContainerStarted","Data":"564105665dc9b8b65acc53dceedfe85d5dd591973e99364eeecc8a7a41b4af4f"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.256275 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.264850 4889 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m47cx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.264956 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" podUID="2ef0e062-1746-4c5e-9e1f-8f75282a9404" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.265940 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" event={"ID":"0ad552dd-0ce7-421e-a8ed-346c5e494d89","Type":"ContainerStarted","Data":"bb60dcd047c20ab6b815b238c4cb2a43f4dc40023ae92a590405470bc4f65694"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.272469 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9sgpt" event={"ID":"b8e3750c-2800-4e2b-90b5-7fb1c06af232","Type":"ContainerStarted","Data":"aa3b49deefd7e88b4c17d767eeb6e954a36399ecc586ba6fba591f70c9ef3d19"} Feb 19 00:08:39 crc 
kubenswrapper[4889]: I0219 00:08:39.288844 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" event={"ID":"23cfc2c7-bd77-4361-b718-c0b6f13d8475","Type":"ContainerStarted","Data":"993a0979252f677748e6640cedfe094d547c43c44b9689b23282da32fa566915"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.292945 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" event={"ID":"ac4fd89a-3d15-4886-ad35-318136b7a519","Type":"ContainerStarted","Data":"a4f53ad009a60a7f3271e6861446e19904d14607f8551e0a98f0c9e3d0d2a7f9"} Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.339665 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.341132 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.841104453 +0000 UTC m=+145.805769444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.350199 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" podStartSLOduration=124.350172523 podStartE2EDuration="2m4.350172523s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.31736121 +0000 UTC m=+145.282026211" watchObservedRunningTime="2026-02-19 00:08:39.350172523 +0000 UTC m=+145.314837514" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.352950 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xf92p" podStartSLOduration=123.352940389 podStartE2EDuration="2m3.352940389s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.348877363 +0000 UTC m=+145.313542364" watchObservedRunningTime="2026-02-19 00:08:39.352940389 +0000 UTC m=+145.317605380" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.353540 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.381707 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-d6xlk" podStartSLOduration=124.38148551 podStartE2EDuration="2m4.38148551s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.380681855 +0000 UTC m=+145.345346856" watchObservedRunningTime="2026-02-19 00:08:39.38148551 +0000 UTC m=+145.346150501"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.436341 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2" podStartSLOduration=123.436316151 podStartE2EDuration="2m3.436316151s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.435052172 +0000 UTC m=+145.399717163" watchObservedRunningTime="2026-02-19 00:08:39.436316151 +0000 UTC m=+145.400981142"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.441436 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.441808 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:39.94179381 +0000 UTC m=+145.906458801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.466603 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc" podStartSLOduration=123.466584765 podStartE2EDuration="2m3.466584765s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.465718888 +0000 UTC m=+145.430383889" watchObservedRunningTime="2026-02-19 00:08:39.466584765 +0000 UTC m=+145.431249756"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.507636 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" podStartSLOduration=123.507608332 podStartE2EDuration="2m3.507608332s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.50531236 +0000 UTC m=+145.469977351" watchObservedRunningTime="2026-02-19 00:08:39.507608332 +0000 UTC m=+145.472273323"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.541466 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" podStartSLOduration=123.541445905 podStartE2EDuration="2m3.541445905s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.541408144 +0000 UTC m=+145.506073145" watchObservedRunningTime="2026-02-19 00:08:39.541445905 +0000 UTC m=+145.506110896"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.542005 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.542361 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.042336953 +0000 UTC m=+146.007001944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.596352 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-txkq5" podStartSLOduration=124.596337029 podStartE2EDuration="2m4.596337029s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.594819672 +0000 UTC m=+145.559484653" watchObservedRunningTime="2026-02-19 00:08:39.596337029 +0000 UTC m=+145.561002020"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.649162 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.649556 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.149539471 +0000 UTC m=+146.114204462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.650912 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mkqxj" podStartSLOduration=123.650899623 podStartE2EDuration="2m3.650899623s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.649038575 +0000 UTC m=+145.613703566" watchObservedRunningTime="2026-02-19 00:08:39.650899623 +0000 UTC m=+145.615564614"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.751456 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.751835 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.251812807 +0000 UTC m=+146.216477798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.814081 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ztf8h" podStartSLOduration=124.814060248 podStartE2EDuration="2m4.814060248s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.812563282 +0000 UTC m=+145.777228283" watchObservedRunningTime="2026-02-19 00:08:39.814060248 +0000 UTC m=+145.778725239"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.847311 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xf92p"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.856321 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.856851 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.356797207 +0000 UTC m=+146.321462198 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.857871 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.858265 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.885432 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jth4m" podStartSLOduration=5.88541498 podStartE2EDuration="5.88541498s" podCreationTimestamp="2026-02-19 00:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.884571854 +0000 UTC m=+145.849236855" watchObservedRunningTime="2026-02-19 00:08:39.88541498 +0000 UTC m=+145.850079971"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.925743 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-658hr" podStartSLOduration=124.925725043 podStartE2EDuration="2m4.925725043s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:39.923953049 +0000 UTC m=+145.888618040" watchObservedRunningTime="2026-02-19 00:08:39.925725043 +0000 UTC m=+145.890390034"
Feb 19 00:08:39 crc kubenswrapper[4889]: I0219 00:08:39.961857 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:39 crc kubenswrapper[4889]: E0219 00:08:39.972391 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.472356893 +0000 UTC m=+146.437021884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.003812 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" podStartSLOduration=124.003778872 podStartE2EDuration="2m4.003778872s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:40.001657986 +0000 UTC m=+145.966322977" watchObservedRunningTime="2026-02-19 00:08:40.003778872 +0000 UTC m=+145.968443863"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.078786 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:40 crc kubenswrapper[4889]: E0219 00:08:40.079047 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.579036424 +0000 UTC m=+146.543701415 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.188310 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:40 crc kubenswrapper[4889]: E0219 00:08:40.189032 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.689004917 +0000 UTC m=+146.653669908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.247643 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 00:03:39 +0000 UTC, rotation deadline is 2026-12-27 09:57:32.705194371 +0000 UTC
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.247690 4889 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7473h48m52.457507943s for next certificate rotation
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.290905 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:40 crc kubenswrapper[4889]: E0219 00:08:40.291871 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.79183842 +0000 UTC m=+146.756503581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.329525 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" event={"ID":"36b73e42-e2e2-4e2a-8df3-879f78fba75f","Type":"ContainerStarted","Data":"ae6ad05f17dc99666607cda25195094714ba3af2ebfb3cd7dfc955037cbb6985"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.346859 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pmgn8" event={"ID":"953c0d0a-1dec-4045-af86-0c6547b3a336","Type":"ContainerStarted","Data":"040edb3e46dc09a7340f523f8bf3c5418d10fdf3524ed5a192320913e7312844"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.364765 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" event={"ID":"7ad046d9-2ae3-470e-9cde-bbed21290815","Type":"ContainerStarted","Data":"d685a46c0727e63b3f5202342bdcd1bc34e9f45caba3069476284cd7599cf0ff"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.387415 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-qb2fs" podStartSLOduration=124.387393169 podStartE2EDuration="2m4.387393169s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:40.381722644 +0000 UTC m=+146.346387635" watchObservedRunningTime="2026-02-19 00:08:40.387393169 +0000 UTC m=+146.352058160"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.393050 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:40 crc kubenswrapper[4889]: E0219 00:08:40.399260 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.899205224 +0000 UTC m=+146.863870245 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.401670 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" event={"ID":"ac4fd89a-3d15-4886-ad35-318136b7a519","Type":"ContainerStarted","Data":"932e029ffd683602da62f37daec1f4c3921a9070456e471d192aae035357164c"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.456801 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" event={"ID":"851d551b-ead7-4bd9-8d0a-88227edc9aad","Type":"ContainerStarted","Data":"6373c58697580af652465664d1b10d35c8c47f381ac56819bcf94cde9f30fc69"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.456859 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" event={"ID":"851d551b-ead7-4bd9-8d0a-88227edc9aad","Type":"ContainerStarted","Data":"1ad1c3e457faf661d00483ec8f78a7d24115a8d9c42ca39654964fccb08b8577"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.458280 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.466551 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" event={"ID":"2665bd86-15fa-4593-9d82-1e3753db401d","Type":"ContainerStarted","Data":"0067bd559896f57e3f3c8e7a8bfd4dc03db7b8d7a3b283a92146b4860076cd5e"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.483893 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" event={"ID":"f7422e6b-7aee-4c6f-95e2-233d4040f239","Type":"ContainerStarted","Data":"2933006e48630ca1680eb3881585113467461d02cf3d8979dda5426b0ad55b09"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.493246 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k8sg2" event={"ID":"34cffbee-ca52-499c-a1cb-4f127b9d352f","Type":"ContainerStarted","Data":"78591d217cb17af4f037044c4306b907296defcfd2b95df21eabfbabc93f2274"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.495326 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:40 crc kubenswrapper[4889]: E0219 00:08:40.497287 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:40.99727113 +0000 UTC m=+146.961936121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.504887 4889 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8w9hf container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.504940 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" podUID="851d551b-ead7-4bd9-8d0a-88227edc9aad" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.558546 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" podStartSLOduration=125.55851783 podStartE2EDuration="2m5.55851783s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:40.498981542 +0000 UTC m=+146.463646533" watchObservedRunningTime="2026-02-19 00:08:40.55851783 +0000 UTC m=+146.523182821"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.560842 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" podStartSLOduration=124.560828091 podStartE2EDuration="2m4.560828091s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:40.557602421 +0000 UTC m=+146.522267412" watchObservedRunningTime="2026-02-19 00:08:40.560828091 +0000 UTC m=+146.525493072"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.568149 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" event={"ID":"2ef0e062-1746-4c5e-9e1f-8f75282a9404","Type":"ContainerStarted","Data":"e74cd566ab69571467cbdf968a7c98ae273bbe9ac74980d166fd0489265e399a"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.569401 4889 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-m47cx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.569438 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" podUID="2ef0e062-1746-4c5e-9e1f-8f75282a9404" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.586045 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" event={"ID":"ce6c3f9e-5a7e-4dea-a104-829fcc97ff5c","Type":"ContainerStarted","Data":"6062abffe00438981d97963789fdbc6e0d7574cad80a0bba8c338e289289f7c5"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.598677 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:40 crc kubenswrapper[4889]: E0219 00:08:40.600952 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.100911108 +0000 UTC m=+147.065576249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.621947 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-vf894" event={"ID":"0bdda273-5648-4fa5-868c-48142d764012","Type":"ContainerStarted","Data":"1d8b9e2e5413b7609abd9500233b06d490454c34b4ea3e19cfd726c3d448a642"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.630586 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" podStartSLOduration=124.630550682 podStartE2EDuration="2m4.630550682s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:40.621409501 +0000 UTC m=+146.586074502" watchObservedRunningTime="2026-02-19 00:08:40.630550682 +0000 UTC m=+146.595215673"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.639822 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz" event={"ID":"cb574028-4d71-408f-9fe7-f2497636fffd","Type":"ContainerStarted","Data":"1c1c14191954eb1916d94742ffbcea6924fd00c3417b59c8c959df324f28c66e"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.703407 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" event={"ID":"8310f440-1e69-4e39-8a1d-2f02c1fb51c0","Type":"ContainerStarted","Data":"12092c4d29857e90763bbcd3c52072c81481763f408d9e7975c0d0eea2f11fd0"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.703642 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" event={"ID":"8310f440-1e69-4e39-8a1d-2f02c1fb51c0","Type":"ContainerStarted","Data":"813a0a8009a46f8eb846387a6d8958fc1fa5a2b50d1f6dc8f2da14b0571257fb"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.706485 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:40 crc kubenswrapper[4889]: E0219 00:08:40.708511 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.208493668 +0000 UTC m=+147.173158849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.738148 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bv6w6" podStartSLOduration=124.738131332 podStartE2EDuration="2m4.738131332s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:40.670182765 +0000 UTC m=+146.634847756" watchObservedRunningTime="2026-02-19 00:08:40.738131332 +0000 UTC m=+146.702796323"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.739704 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h6mnc" podStartSLOduration=124.73969888 podStartE2EDuration="2m4.73969888s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:40.736650597 +0000 UTC m=+146.701315598" watchObservedRunningTime="2026-02-19 00:08:40.73969888 +0000 UTC m=+146.704363861"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.752972 4889 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-xdn9r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused" start-of-body=
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.753423 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" podUID="f3d1db96-e34d-4c20-9556-edec9e27858c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.20:6443/healthz\": dial tcp 10.217.0.20:6443: connect: connection refused"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.807420 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:40 crc kubenswrapper[4889]: E0219 00:08:40.809490 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.309461243 +0000 UTC m=+147.274126234 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.819720 4889 generic.go:334] "Generic (PLEG): container finished" podID="956660a2-d159-418c-933b-147711dd34b9" containerID="ea2b8c515d72f5c07dab358af3576e8f4a73610ae64bc7b87cc5bf398993db16" exitCode=0
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.823409 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" podStartSLOduration=125.823387143 podStartE2EDuration="2m5.823387143s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:40.819104471 +0000 UTC m=+146.783769462" watchObservedRunningTime="2026-02-19 00:08:40.823387143 +0000 UTC m=+146.788052134"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.860783 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.860830 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" event={"ID":"f3d1db96-e34d-4c20-9556-edec9e27858c","Type":"ContainerStarted","Data":"fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.860885 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px" event={"ID":"e039dfef-e576-4bf7-8d87-9569ae038395","Type":"ContainerStarted","Data":"f28798233e76e90cbeefd938acd74c19ddce317d7cb16cf6a9ab84e59aecf3c0"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.860913 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9sgpt" event={"ID":"b8e3750c-2800-4e2b-90b5-7fb1c06af232","Type":"ContainerStarted","Data":"98b5af752c8c0ff17dd7d13f177894188780c3b4fcf79d6b2a0f2fbbe180ea20"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.860927 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" event={"ID":"956660a2-d159-418c-933b-147711dd34b9","Type":"ContainerDied","Data":"ea2b8c515d72f5c07dab358af3576e8f4a73610ae64bc7b87cc5bf398993db16"}
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.863043 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:08:40 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Feb 19 00:08:40 crc kubenswrapper[4889]: [+]process-running ok
Feb 19 00:08:40 crc kubenswrapper[4889]: healthz check failed
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.863090 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.874800 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r"
event={"ID":"b24fb26c-0cf1-4094-8c3d-d274fee59a7a","Type":"ContainerStarted","Data":"77466f63e470a7ba0905bfdeb77ecad10990fba1fc36a1823e08985632edbfbd"} Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.874848 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" event={"ID":"b24fb26c-0cf1-4094-8c3d-d274fee59a7a","Type":"ContainerStarted","Data":"5b7c2aba963abac5190b38c72ee0a8cc98f5e307279f3351fd916e02ad8644ce"} Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.875870 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.902741 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xf92p" event={"ID":"d47bfc68-8f0f-4717-be7a-fccf98897cda","Type":"ContainerStarted","Data":"24b55ac5a4d2b289c9bb08e564b85bb3e787c213de6a376248c8b82cb4bef4c2"} Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.911667 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:40 crc kubenswrapper[4889]: E0219 00:08:40.917095 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.417070174 +0000 UTC m=+147.381735165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.966524 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" event={"ID":"951c61d4-aa85-4db8-8ddf-f7889e8f85ba","Type":"ContainerStarted","Data":"6b1453161212a973be6ecf790d595ef6ecc64c57697df06b842783e819112487"} Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.975558 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" podStartSLOduration=124.975527897 podStartE2EDuration="2m4.975527897s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:40.965851299 +0000 UTC m=+146.930516290" watchObservedRunningTime="2026-02-19 00:08:40.975527897 +0000 UTC m=+146.940192888" Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.982895 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-658hr" event={"ID":"cf74118f-1818-4680-98a6-a32fe3cc2725","Type":"ContainerStarted","Data":"73099a783ed903a2cae035f5056ea6aa125db0bf481dfeb941e92bc0b387a123"} Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.984938 4889 patch_prober.go:28] interesting pod/console-operator-58897d9998-658hr container/console-operator namespace/openshift-console-operator: Readiness probe 
status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 19 00:08:40 crc kubenswrapper[4889]: I0219 00:08:40.984971 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-658hr" podUID="cf74118f-1818-4680-98a6-a32fe3cc2725" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.008868 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jsmd" podStartSLOduration=125.008851076 podStartE2EDuration="2m5.008851076s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:41.007771213 +0000 UTC m=+146.972436204" watchObservedRunningTime="2026-02-19 00:08:41.008851076 +0000 UTC m=+146.973516077" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.009879 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" event={"ID":"7dc128cd-807d-4a60-9629-be4dbf53cf9b","Type":"ContainerStarted","Data":"a6bb2556474809c704e0aecf17d5a857992e09a64a0772649e6d02ce44e75524"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.009946 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" event={"ID":"7dc128cd-807d-4a60-9629-be4dbf53cf9b","Type":"ContainerStarted","Data":"e25ecdb8290fd00a51e1e4921fb8577b2cf029ac7a3e5f6e4da6438e2f1044a3"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.014344 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.016531 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.516504032 +0000 UTC m=+147.481169023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.021684 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.045131 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" event={"ID":"f7a4c945-2b4c-4b30-ad06-9158ce04018e","Type":"ContainerStarted","Data":"1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.050756 4889 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sfp24 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Feb 19 00:08:41 crc 
kubenswrapper[4889]: I0219 00:08:41.050849 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" podUID="f7a4c945-2b4c-4b30-ad06-9158ce04018e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.076293 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" event={"ID":"23cfc2c7-bd77-4361-b718-c0b6f13d8475","Type":"ContainerStarted","Data":"8d72cf8e675b4a4f050c01b5a82717fb5a37845a9fd00638ae22c569b1681f5b"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.116359 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.126152 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.626133045 +0000 UTC m=+147.590798036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.130881 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" event={"ID":"0041b2eb-f313-4b10-9f6c-f4431ddc93f5","Type":"ContainerStarted","Data":"5f3a708efe808fb059ca0393315e7f0769641628bf2ca94353fcc39069060389"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.156654 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" event={"ID":"4e55dbef-89d4-4f2f-a4db-714290b34dad","Type":"ContainerStarted","Data":"610721d6b4c3b9beea6cae821b6d4eb176c7ca70c144014fd156dda9f3ef80a7"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.157725 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.178166 4889 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-86j7r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" start-of-body= Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.178740 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" podUID="4e55dbef-89d4-4f2f-a4db-714290b34dad" containerName="packageserver" probeResult="failure" output="Get 
\"https://10.217.0.24:5443/healthz\": dial tcp 10.217.0.24:5443: connect: connection refused" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.198650 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x7hdg" event={"ID":"7c086d0f-4dda-4123-b2e9-fe95519e76ff","Type":"ContainerStarted","Data":"dc4059dd6f171f699fb16c3c3ef1a24ad424dcadb83c9b90b75ca2e44eebb671"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.201600 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28" event={"ID":"356a8f86-9fc9-4c30-accb-4866e29407fd","Type":"ContainerStarted","Data":"e5053fcbb082bd39a39ad1903e0e45c1caae8a458972072c18b06aafe50bffb4"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.243794 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-stmrq" podStartSLOduration=125.243773825 podStartE2EDuration="2m5.243773825s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:41.177768928 +0000 UTC m=+147.142433929" watchObservedRunningTime="2026-02-19 00:08:41.243773825 +0000 UTC m=+147.208438816" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.246380 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" podStartSLOduration=125.246365425 podStartE2EDuration="2m5.246365425s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:41.242371062 +0000 UTC m=+147.207036053" watchObservedRunningTime="2026-02-19 00:08:41.246365425 +0000 UTC m=+147.211030416" Feb 19 00:08:41 crc kubenswrapper[4889]: 
I0219 00:08:41.246719 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.248375 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.748352907 +0000 UTC m=+147.713017898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.278907 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79hxc" event={"ID":"d67eea0c-6059-4e76-8bdf-0fb3c25e2717","Type":"ContainerStarted","Data":"be8229ee56170c2638123b2a847bac7210c29e2d56a4eaaec25e861e720a6e7c"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.310841 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-x7hdg" podStartSLOduration=7.310808713 podStartE2EDuration="7.310808713s" podCreationTimestamp="2026-02-19 00:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
00:08:41.301626221 +0000 UTC m=+147.266291212" watchObservedRunningTime="2026-02-19 00:08:41.310808713 +0000 UTC m=+147.275473704" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.313643 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-mvjfx" event={"ID":"5ea4763c-95a2-4470-a1ac-a3d0f5fe49e5","Type":"ContainerStarted","Data":"9448dfb656b69c78ffd972a4abd7a22356a5cc4ddd143e1c3899d3b48430062a"} Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.317401 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.317469 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.333643 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.364039 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.372549 4889 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.872533418 +0000 UTC m=+147.837198409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.466586 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.466890 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.966849779 +0000 UTC m=+147.931514770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.467330 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.467928 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:41.967918811 +0000 UTC m=+147.932583802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.568520 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.568714 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.06867468 +0000 UTC m=+148.033339671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.568825 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.569158 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.069150485 +0000 UTC m=+148.033815476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.669750 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.670147 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.170126051 +0000 UTC m=+148.134791042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.771613 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.772281 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.272255823 +0000 UTC m=+148.236920814 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.852461 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:41 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:41 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:41 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.852579 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.873802 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.874021 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:08:42.373989372 +0000 UTC m=+148.338654363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.874234 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.874764 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.374741465 +0000 UTC m=+148.339406446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.975126 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.975265 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.475241766 +0000 UTC m=+148.439906757 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:41 crc kubenswrapper[4889]: I0219 00:08:41.975320 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:41 crc kubenswrapper[4889]: E0219 00:08:41.975669 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.475657519 +0000 UTC m=+148.440322540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.076050 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.076206 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.576181981 +0000 UTC m=+148.540846972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.076816 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.077107 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.577095859 +0000 UTC m=+148.541760850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.177743 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.177899 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.677874689 +0000 UTC m=+148.642539680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.178042 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.178374 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.678363314 +0000 UTC m=+148.643028305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.279084 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.279323 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.779285748 +0000 UTC m=+148.743950739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.279506 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.279872 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.779857086 +0000 UTC m=+148.744522077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.341760 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9sgpt" event={"ID":"b8e3750c-2800-4e2b-90b5-7fb1c06af232","Type":"ContainerStarted","Data":"ecb2914d2f938f100b3953143480c4eab6ab5bc0e26de1ce62141e7e6ee11af1"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.341968 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.345297 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" event={"ID":"956660a2-d159-418c-933b-147711dd34b9","Type":"ContainerStarted","Data":"a262700ef5844da99bbd1745b07983b199cca40f657e3b9c594523f5d1c00c87"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.345549 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.348534 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" event={"ID":"2665bd86-15fa-4593-9d82-1e3753db401d","Type":"ContainerStarted","Data":"7f319743788adea83aec9b5d0daac477d3968fce1f71205d9c383d3483a5a12e"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.350406 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x66jj" 
event={"ID":"f8ed1e40-fb4c-4a2c-a713-d42036ee7138","Type":"ContainerStarted","Data":"ae55e140420ad3508eb580c9e738b402b49cd0764089e6f8d454a0a1dbc640d6"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.355865 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" event={"ID":"7dc128cd-807d-4a60-9629-be4dbf53cf9b","Type":"ContainerStarted","Data":"fb4505b12244dc25b88f6861243ba28dc8401673225d987ec2619cb695a6b6e4"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.358517 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz" event={"ID":"cb574028-4d71-408f-9fe7-f2497636fffd","Type":"ContainerStarted","Data":"63e1e0cce6bb25ba2b12b72cb5e737fd374bfbb4aba0def2dddfbd34576071b8"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.358619 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz" event={"ID":"cb574028-4d71-408f-9fe7-f2497636fffd","Type":"ContainerStarted","Data":"665bf699952aa3ff37ed5946c66a566843758ed78f0d105bff35fb7277be981c"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.365692 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" event={"ID":"c7c5e287-0d1b-4299-830f-9a32db4a5486","Type":"ContainerStarted","Data":"50afc4e83a4fd2e82f947dc255256b28cd6f4427c4ac08bfebf0205d108cf590"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.371319 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" event={"ID":"b24fb26c-0cf1-4094-8c3d-d274fee59a7a","Type":"ContainerStarted","Data":"c77602d7e66fef64c5678a80577997969f7bfe097b295ce68d11f69dd71a40ff"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.374378 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28" event={"ID":"356a8f86-9fc9-4c30-accb-4866e29407fd","Type":"ContainerStarted","Data":"dfaaccd559ba542219749f1569cd098b369bdd1fca21cbbc48225c06a112e5b7"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.374447 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28" event={"ID":"356a8f86-9fc9-4c30-accb-4866e29407fd","Type":"ContainerStarted","Data":"6a1fed842c7adc54ed39aff3c0e876fcba95ab68e6733da28ada30f30303678c"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.377536 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-pjqvk" event={"ID":"f7422e6b-7aee-4c6f-95e2-233d4040f239","Type":"ContainerStarted","Data":"b6eb9df5d58fb5eb7f80d00a477af267949df84d0bfc4d5eb98fa70fafb51c5a"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.384564 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.384723 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.884690941 +0000 UTC m=+148.849355932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.385869 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.385926 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.386055 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.386232 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.386307 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.396618 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.896595649 +0000 UTC m=+148.861260630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.398397 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px" event={"ID":"e039dfef-e576-4bf7-8d87-9569ae038395","Type":"ContainerStarted","Data":"643b3fae117f95fb89d9703093daf3eac47254bae515b97295598fa2a7ec19ad"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.405354 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.427308 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.433625 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9sgpt" podStartSLOduration=8.433596379 podStartE2EDuration="8.433596379s" podCreationTimestamp="2026-02-19 00:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:42.384192485 +0000 UTC m=+148.348857486" watchObservedRunningTime="2026-02-19 00:08:42.433596379 +0000 UTC m=+148.398261370" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.443439 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" event={"ID":"0041b2eb-f313-4b10-9f6c-f4431ddc93f5","Type":"ContainerStarted","Data":"9244989bdc082a6f13c1cf7479eb5094680964b525353b46771be79d4521d85b"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.463438 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.472325 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.482940 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" event={"ID":"7ad046d9-2ae3-470e-9cde-bbed21290815","Type":"ContainerStarted","Data":"8b5a76de294c2a52639f1453f6270654ab98c3b5278358b769a82d4429f660e9"} Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.496752 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.497192 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:42.997175592 +0000 UTC m=+148.961840583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.498530 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.498626 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.498745 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-wzzvn" podStartSLOduration=126.49873232 
podStartE2EDuration="2m6.49873232s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:42.436093046 +0000 UTC m=+148.400758037" watchObservedRunningTime="2026-02-19 00:08:42.49873232 +0000 UTC m=+148.463397311" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.499587 4889 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sfp24 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" start-of-body= Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.499616 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" podUID="f7a4c945-2b4c-4b30-ad06-9158ce04018e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.43:8080/healthz\": dial tcp 10.217.0.43:8080: connect: connection refused" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.516751 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-grf28" podStartSLOduration=127.516727875 podStartE2EDuration="2m7.516727875s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:42.49778976 +0000 UTC m=+148.462454751" watchObservedRunningTime="2026-02-19 00:08:42.516727875 +0000 UTC m=+148.481392866" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.542383 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-m47cx" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.547378 4889 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8w9hf" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.557446 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.592769 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.600477 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.605294 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:43.105274617 +0000 UTC m=+149.069939608 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.612438 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.619659 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" podStartSLOduration=126.619639321 podStartE2EDuration="2m6.619639321s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:42.618053821 +0000 UTC m=+148.582718812" watchObservedRunningTime="2026-02-19 00:08:42.619639321 +0000 UTC m=+148.584304302" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.668041 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h6lzz" podStartSLOduration=126.66801220400001 podStartE2EDuration="2m6.668012204s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:42.663931687 +0000 UTC m=+148.628596678" watchObservedRunningTime="2026-02-19 00:08:42.668012204 +0000 UTC m=+148.632677195" Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.703947 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.705046 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 00:08:43.205022215 +0000 UTC m=+149.169687206 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.805495 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.805845 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:43.305832446 +0000 UTC m=+149.270497437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.855802 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-658hr"
Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.855930 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:08:42 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Feb 19 00:08:42 crc kubenswrapper[4889]: [+]process-running ok
Feb 19 00:08:42 crc kubenswrapper[4889]: healthz check failed
Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.855960 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.908290 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:42 crc kubenswrapper[4889]: E0219 00:08:42.909150 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:43.409133223 +0000 UTC m=+149.373798214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:42 crc kubenswrapper[4889]: I0219 00:08:42.916363 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" podStartSLOduration=127.916329136 podStartE2EDuration="2m7.916329136s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:42.812541623 +0000 UTC m=+148.777206624" watchObservedRunningTime="2026-02-19 00:08:42.916329136 +0000 UTC m=+148.880994127"
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.011925 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.012298 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:43.512286657 +0000 UTC m=+149.476951648 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.019875 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bdl2b" podStartSLOduration=127.01985948 podStartE2EDuration="2m7.01985948s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:42.921610849 +0000 UTC m=+148.886275840" watchObservedRunningTime="2026-02-19 00:08:43.01985948 +0000 UTC m=+148.984524471"
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.112553 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.112825 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:43.612813159 +0000 UTC m=+149.577478140 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.223292 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.223794 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:43.723776403 +0000 UTC m=+149.688441394 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.239139 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" podStartSLOduration=128.239119847 podStartE2EDuration="2m8.239119847s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:43.022269615 +0000 UTC m=+148.986934606" watchObservedRunningTime="2026-02-19 00:08:43.239119847 +0000 UTC m=+149.203784838"
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.322630 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mp2px" podStartSLOduration=127.322612273 podStartE2EDuration="2m7.322612273s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:43.244065799 +0000 UTC m=+149.208730790" watchObservedRunningTime="2026-02-19 00:08:43.322612273 +0000 UTC m=+149.287277264"
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.325931 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.326600 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:43.826574445 +0000 UTC m=+149.791239436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.430409 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.431452 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:43.931432731 +0000 UTC m=+149.896097722 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.489509 4889 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-xdn9r container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.20:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.489627 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" podUID="f3d1db96-e34d-4c20-9556-edec9e27858c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.20:6443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.489723 4889 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-86j7r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.489739 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" podUID="4e55dbef-89d4-4f2f-a4db-714290b34dad" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.533888 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.534297 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.034273964 +0000 UTC m=+149.998938955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.548303 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x66jj" event={"ID":"f8ed1e40-fb4c-4a2c-a713-d42036ee7138","Type":"ContainerStarted","Data":"eaa602f088b9bdc7fbdd12595113daa8b38eb8592abc06156c9fbc4d35f58eab"}
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.566637 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24"
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.635272 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.668463 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.168444734 +0000 UTC m=+150.133109725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.695921 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-rvjx7" podStartSLOduration=127.695892021 podStartE2EDuration="2m7.695892021s" podCreationTimestamp="2026-02-19 00:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:43.678763783 +0000 UTC m=+149.643428774" watchObservedRunningTime="2026-02-19 00:08:43.695892021 +0000 UTC m=+149.660557002"
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.740390 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.740936 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.24091037 +0000 UTC m=+150.205575361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.844489 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.845374 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.345357533 +0000 UTC m=+150.310022524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.866665 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:08:43 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld
Feb 19 00:08:43 crc kubenswrapper[4889]: [+]process-running ok
Feb 19 00:08:43 crc kubenswrapper[4889]: healthz check failed
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.866764 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:08:43 crc kubenswrapper[4889]: I0219 00:08:43.950933 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:43 crc kubenswrapper[4889]: E0219 00:08:43.951325 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.451305662 +0000 UTC m=+150.415970653 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.053663 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.054364 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.554348552 +0000 UTC m=+150.519013543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.159682 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.160576 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.660555939 +0000 UTC m=+150.625220930 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.265370 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.265819 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.765801687 +0000 UTC m=+150.730466678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.371209 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.371621 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.871595372 +0000 UTC m=+150.836260363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.473793 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.474409 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:44.974360212 +0000 UTC m=+150.939025203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.557874 4889 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-86j7r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.557931 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" podUID="4e55dbef-89d4-4f2f-a4db-714290b34dad" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.24:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.568993 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8cf1c176818d477be965cce0124dab898164a479dd67d0b98257ad12ab0000b8"}
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.570895 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x66jj" event={"ID":"f8ed1e40-fb4c-4a2c-a713-d42036ee7138","Type":"ContainerStarted","Data":"11fb85f540638a2e50625e0df66c50416a74a98a401c566ef63ed0a36162395b"}
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.571966 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"6a74dc6afabb16556c83c15c2d5faa076829f04b8d1d3ed04499b704d40466b2"}
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.572000 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4eed9acc3d1cce88e14f747a570c4e77eb2175c6f62f52f17bcd3fa3884511de"}
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.575939 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.576683 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.579009 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:45.078971041 +0000 UTC m=+151.043636122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.590708 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jhjcw"]
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.593898 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhjcw"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.596530 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"756853d6ca6aca6aff1d5b2c6d210834f2e7c0b54e396ea3d7b20e1c451fbea8"}
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.605597 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.611621 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhjcw"]
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.679162 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-catalog-content\") pod \"community-operators-jhjcw\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " pod="openshift-marketplace/community-operators-jhjcw"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.679232 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcvt9\" (UniqueName: \"kubernetes.io/projected/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-kube-api-access-rcvt9\") pod \"community-operators-jhjcw\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " pod="openshift-marketplace/community-operators-jhjcw"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.679396 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.679482 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-utilities\") pod \"community-operators-jhjcw\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " pod="openshift-marketplace/community-operators-jhjcw"
Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.680958 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:45.180934087 +0000 UTC m=+151.145599078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.757893 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6jh8"]
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.759035 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6jh8"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.766915 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.783174 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6jh8"]
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.784097 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.784412 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-utilities\") pod \"community-operators-jhjcw\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " pod="openshift-marketplace/community-operators-jhjcw"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.784497 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-catalog-content\") pod \"community-operators-jhjcw\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " pod="openshift-marketplace/community-operators-jhjcw"
Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.784532 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcvt9\" (UniqueName: \"kubernetes.io/projected/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-kube-api-access-rcvt9\") pod \"community-operators-jhjcw\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " pod="openshift-marketplace/community-operators-jhjcw"
Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.785054 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:45.285032499 +0000 UTC m=+151.249697500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.785601 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-utilities\") pod \"community-operators-jhjcw\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " pod="openshift-marketplace/community-operators-jhjcw" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.785835 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-catalog-content\") pod \"community-operators-jhjcw\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " pod="openshift-marketplace/community-operators-jhjcw" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.829354 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcvt9\" (UniqueName: \"kubernetes.io/projected/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-kube-api-access-rcvt9\") pod \"community-operators-jhjcw\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " pod="openshift-marketplace/community-operators-jhjcw" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.852607 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:44 crc kubenswrapper[4889]: [-]has-synced failed: reason 
withheld Feb 19 00:08:44 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:44 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.852678 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.888242 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-catalog-content\") pod \"certified-operators-b6jh8\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.888289 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.888311 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf9r6\" (UniqueName: \"kubernetes.io/projected/f09a3256-3dd8-4e60-bee8-379678cf15f7-kube-api-access-vf9r6\") pod \"certified-operators-b6jh8\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.888429 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-utilities\") pod \"certified-operators-b6jh8\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.889868 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:45.389851194 +0000 UTC m=+151.354516185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.894944 4889 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.917546 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhjcw" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.953546 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-22jj8"] Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.962086 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.990935 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.991268 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf9r6\" (UniqueName: \"kubernetes.io/projected/f09a3256-3dd8-4e60-bee8-379678cf15f7-kube-api-access-vf9r6\") pod \"certified-operators-b6jh8\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.991364 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-utilities\") pod \"certified-operators-b6jh8\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.991401 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-catalog-content\") pod \"certified-operators-b6jh8\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.991944 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-catalog-content\") pod \"certified-operators-b6jh8\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") 
" pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:44 crc kubenswrapper[4889]: E0219 00:08:44.992076 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:45.492052767 +0000 UTC m=+151.456717758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:44 crc kubenswrapper[4889]: I0219 00:08:44.992668 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-utilities\") pod \"certified-operators-b6jh8\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.091328 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf9r6\" (UniqueName: \"kubernetes.io/projected/f09a3256-3dd8-4e60-bee8-379678cf15f7-kube-api-access-vf9r6\") pod \"certified-operators-b6jh8\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.096040 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-catalog-content\") pod \"community-operators-22jj8\" (UID: 
\"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.096100 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b98x\" (UniqueName: \"kubernetes.io/projected/c85a27e0-27e4-4df6-8773-fb60d29697ff-kube-api-access-8b98x\") pod \"community-operators-22jj8\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.096124 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-utilities\") pod \"community-operators-22jj8\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.096204 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:45 crc kubenswrapper[4889]: E0219 00:08:45.096527 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:45.59651528 +0000 UTC m=+151.561180271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.100379 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22jj8"] Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.193672 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s7t7k"] Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.194749 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.196818 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.197066 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-catalog-content\") pod \"community-operators-22jj8\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.197103 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b98x\" (UniqueName: 
\"kubernetes.io/projected/c85a27e0-27e4-4df6-8773-fb60d29697ff-kube-api-access-8b98x\") pod \"community-operators-22jj8\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.197131 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-utilities\") pod \"community-operators-22jj8\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.197676 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-catalog-content\") pod \"community-operators-22jj8\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: E0219 00:08:45.197770 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:45.697753844 +0000 UTC m=+151.662418835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.198320 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-utilities\") pod \"community-operators-22jj8\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.237113 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b98x\" (UniqueName: \"kubernetes.io/projected/c85a27e0-27e4-4df6-8773-fb60d29697ff-kube-api-access-8b98x\") pod \"community-operators-22jj8\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.247285 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7t7k"] Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.298596 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.298665 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-catalog-content\") pod \"certified-operators-s7t7k\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.298699 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-utilities\") pod \"certified-operators-s7t7k\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.298769 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flq2w\" (UniqueName: \"kubernetes.io/projected/e0bf937f-3956-40f1-9e52-d2000c46291c-kube-api-access-flq2w\") pod \"certified-operators-s7t7k\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: E0219 00:08:45.302353 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:45.802332542 +0000 UTC m=+151.766997533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.315973 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.376732 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.400465 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.400656 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flq2w\" (UniqueName: \"kubernetes.io/projected/e0bf937f-3956-40f1-9e52-d2000c46291c-kube-api-access-flq2w\") pod \"certified-operators-s7t7k\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.400730 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-catalog-content\") pod \"certified-operators-s7t7k\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " 
pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.400751 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-utilities\") pod \"certified-operators-s7t7k\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.401126 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-utilities\") pod \"certified-operators-s7t7k\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: E0219 00:08:45.401245 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:45.901229883 +0000 UTC m=+151.865894874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.401468 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-catalog-content\") pod \"certified-operators-s7t7k\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.419403 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flq2w\" (UniqueName: \"kubernetes.io/projected/e0bf937f-3956-40f1-9e52-d2000c46291c-kube-api-access-flq2w\") pod \"certified-operators-s7t7k\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.502743 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:45 crc kubenswrapper[4889]: E0219 00:08:45.503163 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:08:46.003148688 +0000 UTC m=+151.967813679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.517305 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.530960 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2qk2n" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.568823 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jhjcw"] Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.608314 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:45 crc kubenswrapper[4889]: E0219 00:08:45.614129 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:46.114101232 +0000 UTC m=+152.078766223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.617691 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.618521 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.622236 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.622509 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.646372 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.716106 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7c3151f-242d-44be-b51f-f98cab7e680e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7c3151f-242d-44be-b51f-f98cab7e680e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.716176 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.716211 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7c3151f-242d-44be-b51f-f98cab7e680e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7c3151f-242d-44be-b51f-f98cab7e680e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:45 crc kubenswrapper[4889]: E0219 00:08:45.716618 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:08:46.216596795 +0000 UTC m=+152.181261786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9b8dc" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.719870 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x66jj" event={"ID":"f8ed1e40-fb4c-4a2c-a713-d42036ee7138","Type":"ContainerStarted","Data":"21d18c24ccaa91e675b9e09929247b75b3034f4ec6e55899e48ac503788a1e81"} Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.725794 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ce894db799919a91c431aef5bd73a3aef647c856211d660e926670f977ad1d31"} Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.744577 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-x66jj" podStartSLOduration=11.744556068 podStartE2EDuration="11.744556068s" podCreationTimestamp="2026-02-19 00:08:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:45.74270924 +0000 UTC m=+151.707374251" watchObservedRunningTime="2026-02-19 00:08:45.744556068 +0000 UTC m=+151.709221059" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.759331 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a2f3bb5d1b1426a88af5c4f9e02280e970106ac074c2d8d9b48c14a5c1a5cf38"} Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.818374 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.818818 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7c3151f-242d-44be-b51f-f98cab7e680e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7c3151f-242d-44be-b51f-f98cab7e680e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.818963 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7c3151f-242d-44be-b51f-f98cab7e680e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7c3151f-242d-44be-b51f-f98cab7e680e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:45 crc kubenswrapper[4889]: E0219 00:08:45.820446 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 00:08:46.320418209 +0000 UTC m=+152.285083200 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.820502 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7c3151f-242d-44be-b51f-f98cab7e680e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b7c3151f-242d-44be-b51f-f98cab7e680e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.868032 4889 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T00:08:44.894966931Z","Handler":null,"Name":""} Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.869257 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7c3151f-242d-44be-b51f-f98cab7e680e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b7c3151f-242d-44be-b51f-f98cab7e680e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.880172 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:45 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:45 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:45 
crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.880271 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.910781 4889 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.910875 4889 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.914076 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22jj8"] Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.921137 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.929008 4889 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.929048 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.976776 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6jh8"] Feb 19 00:08:45 crc kubenswrapper[4889]: I0219 00:08:45.984551 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9b8dc\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.009897 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.024610 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7t7k"] Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.031175 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.039657 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 00:08:46 crc kubenswrapper[4889]: W0219 00:08:46.048769 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf09a3256_3dd8_4e60_bee8_379678cf15f7.slice/crio-30e6d7fca88c4c4a928e710bb2544a605a5e898a8d9f79dfe63ba5cb41b25032 WatchSource:0}: Error finding container 30e6d7fca88c4c4a928e710bb2544a605a5e898a8d9f79dfe63ba5cb41b25032: Status 404 returned error can't find the container with id 30e6d7fca88c4c4a928e710bb2544a605a5e898a8d9f79dfe63ba5cb41b25032 Feb 19 00:08:46 crc kubenswrapper[4889]: W0219 00:08:46.049275 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0bf937f_3956_40f1_9e52_d2000c46291c.slice/crio-e11e3a33f69d61f8bfef3bc9350da5853008b0aec7d386a2e4f0b96ce67985c8 WatchSource:0}: Error finding container e11e3a33f69d61f8bfef3bc9350da5853008b0aec7d386a2e4f0b96ce67985c8: Status 404 returned error can't find the container with id e11e3a33f69d61f8bfef3bc9350da5853008b0aec7d386a2e4f0b96ce67985c8 Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.114697 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.220802 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.220875 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.235782 4889 patch_prober.go:28] interesting pod/apiserver-76f77b778f-zmbvn container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]log ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]etcd ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]poststarthook/generic-apiserver-start-informers ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]poststarthook/max-in-flight-filter ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 19 00:08:46 crc kubenswrapper[4889]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 19 00:08:46 crc kubenswrapper[4889]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 19 00:08:46 crc kubenswrapper[4889]: [+]poststarthook/project.openshift.io-projectcache ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]poststarthook/openshift.io-startinformers ok Feb 19 00:08:46 crc kubenswrapper[4889]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 19 00:08:46 crc 
kubenswrapper[4889]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 00:08:46 crc kubenswrapper[4889]: livez check failed Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.236546 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" podUID="0041b2eb-f313-4b10-9f6c-f4431ddc93f5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.239577 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.239622 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.249761 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.289804 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.289893 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.292440 4889 patch_prober.go:28] interesting pod/console-f9d7485db-ft7cw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.292487 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ft7cw" podUID="d58c7e7a-3804-4b7a-bfb0-e79b50d92710" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.328109 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.419474 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9b8dc"] Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.701913 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.702437 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.701942 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.702564 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.733115 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.752230 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vgwtt"] Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.753642 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.757031 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.767233 4889 generic.go:334] "Generic (PLEG): container finished" podID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerID="3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5" exitCode=0 Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.767323 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6jh8" event={"ID":"f09a3256-3dd8-4e60-bee8-379678cf15f7","Type":"ContainerDied","Data":"3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.767372 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6jh8" event={"ID":"f09a3256-3dd8-4e60-bee8-379678cf15f7","Type":"ContainerStarted","Data":"30e6d7fca88c4c4a928e710bb2544a605a5e898a8d9f79dfe63ba5cb41b25032"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.768890 4889 generic.go:334] "Generic (PLEG): container finished" podID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerID="ad5383c5ce66e0db0cc9f15e04ef6498e8a16c8db6ec90da31ea23a2cd088aed" exitCode=0 Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.768973 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-22jj8" event={"ID":"c85a27e0-27e4-4df6-8773-fb60d29697ff","Type":"ContainerDied","Data":"ad5383c5ce66e0db0cc9f15e04ef6498e8a16c8db6ec90da31ea23a2cd088aed"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.769006 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22jj8" event={"ID":"c85a27e0-27e4-4df6-8773-fb60d29697ff","Type":"ContainerStarted","Data":"a0dbfd4ce634c7bca6fe7da4651235480a9a3605ef32f5b88761740a57de3e63"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.769165 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.771577 4889 generic.go:334] "Generic (PLEG): container finished" podID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerID="5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288" exitCode=0 Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.771633 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7t7k" event={"ID":"e0bf937f-3956-40f1-9e52-d2000c46291c","Type":"ContainerDied","Data":"5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.771709 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7t7k" event={"ID":"e0bf937f-3956-40f1-9e52-d2000c46291c","Type":"ContainerStarted","Data":"e11e3a33f69d61f8bfef3bc9350da5853008b0aec7d386a2e4f0b96ce67985c8"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.774454 4889 generic.go:334] "Generic (PLEG): container finished" podID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerID="57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7" exitCode=0 Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.774497 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-jhjcw" event={"ID":"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5","Type":"ContainerDied","Data":"57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.774531 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhjcw" event={"ID":"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5","Type":"ContainerStarted","Data":"a93aa30a790eb336d47049b164a5d12a1d467eb80c9a3fa0130d6ddace141151"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.781029 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" event={"ID":"585fe6a3-bec2-42fb-bc1c-75203481f19a","Type":"ContainerStarted","Data":"278a736491d8ac064dce76afd169a4e0d60c517d01c3d55cc07475b657525d49"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.781080 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" event={"ID":"585fe6a3-bec2-42fb-bc1c-75203481f19a","Type":"ContainerStarted","Data":"de6ce268f50ee5382839c8c1b5a2e09fb4b8b741498f7c9a0ce191c3f4e03c85"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.781242 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.784103 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b7c3151f-242d-44be-b51f-f98cab7e680e","Type":"ContainerStarted","Data":"631e57eedf82cf70d5661796e463f2e47ee91401f1b3104227c4aaf81899b615"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.784140 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"b7c3151f-242d-44be-b51f-f98cab7e680e","Type":"ContainerStarted","Data":"66951b6a093a0c3ece457edbcba6a01e6ccc48cc8ba1963a9c7997592ca2ccad"} Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.815744 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x9kbz" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.818583 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.842141 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgwtt"] Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.860443 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.861369 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-catalog-content\") pod \"redhat-marketplace-vgwtt\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.861419 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87j65\" (UniqueName: \"kubernetes.io/projected/1eb8ee4a-7192-4edc-a132-248289edb91f-kube-api-access-87j65\") pod \"redhat-marketplace-vgwtt\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.861450 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-utilities\") pod \"redhat-marketplace-vgwtt\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.864615 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.864583019 podStartE2EDuration="1.864583019s" podCreationTimestamp="2026-02-19 00:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:46.860579915 +0000 UTC m=+152.825244896" watchObservedRunningTime="2026-02-19 00:08:46.864583019 +0000 UTC m=+152.829248010" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.869611 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:46 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:46 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:46 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.869707 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.950564 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" podStartSLOduration=131.950541251 podStartE2EDuration="2m11.950541251s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:46.926715395 +0000 UTC m=+152.891380386" watchObservedRunningTime="2026-02-19 00:08:46.950541251 +0000 UTC m=+152.915206242" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.962171 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-utilities\") pod \"redhat-marketplace-vgwtt\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.962547 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-catalog-content\") pod \"redhat-marketplace-vgwtt\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.962637 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87j65\" (UniqueName: \"kubernetes.io/projected/1eb8ee4a-7192-4edc-a132-248289edb91f-kube-api-access-87j65\") pod \"redhat-marketplace-vgwtt\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.963112 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-utilities\") pod \"redhat-marketplace-vgwtt\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.964891 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-catalog-content\") pod \"redhat-marketplace-vgwtt\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:46 crc kubenswrapper[4889]: I0219 00:08:46.992456 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87j65\" (UniqueName: \"kubernetes.io/projected/1eb8ee4a-7192-4edc-a132-248289edb91f-kube-api-access-87j65\") pod \"redhat-marketplace-vgwtt\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.070146 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.157863 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpfj"] Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.160122 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.170313 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-86j7r" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.182358 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpfj"] Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.267345 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-utilities\") pod \"redhat-marketplace-ghpfj\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.267429 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d959\" (UniqueName: \"kubernetes.io/projected/4841b44f-786c-4f3d-af26-c6ae5b08eee4-kube-api-access-4d959\") pod \"redhat-marketplace-ghpfj\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.267502 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-catalog-content\") pod \"redhat-marketplace-ghpfj\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.369512 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-utilities\") pod \"redhat-marketplace-ghpfj\" (UID: 
\"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.369609 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d959\" (UniqueName: \"kubernetes.io/projected/4841b44f-786c-4f3d-af26-c6ae5b08eee4-kube-api-access-4d959\") pod \"redhat-marketplace-ghpfj\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.369673 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-catalog-content\") pod \"redhat-marketplace-ghpfj\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.370470 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-utilities\") pod \"redhat-marketplace-ghpfj\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.371039 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-catalog-content\") pod \"redhat-marketplace-ghpfj\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.395173 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d959\" (UniqueName: \"kubernetes.io/projected/4841b44f-786c-4f3d-af26-c6ae5b08eee4-kube-api-access-4d959\") pod \"redhat-marketplace-ghpfj\" (UID: 
\"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.498044 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.570410 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgwtt"] Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.747645 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j6glr"] Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.748967 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.753314 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.766126 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6glr"] Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.842519 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgwtt" event={"ID":"1eb8ee4a-7192-4edc-a132-248289edb91f","Type":"ContainerStarted","Data":"1b2cd812b1a88f5ce22470006d77674454a03a3bf349e3ed74b0528c593c345c"} Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.855580 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:47 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:47 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:47 crc kubenswrapper[4889]: 
healthz check failed Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.855666 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.871189 4889 generic.go:334] "Generic (PLEG): container finished" podID="b7c3151f-242d-44be-b51f-f98cab7e680e" containerID="631e57eedf82cf70d5661796e463f2e47ee91401f1b3104227c4aaf81899b615" exitCode=0 Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.871292 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b7c3151f-242d-44be-b51f-f98cab7e680e","Type":"ContainerDied","Data":"631e57eedf82cf70d5661796e463f2e47ee91401f1b3104227c4aaf81899b615"} Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.892116 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlnw\" (UniqueName: \"kubernetes.io/projected/39c55028-d3da-4bad-92f6-34e5250a9276-kube-api-access-bzlnw\") pod \"redhat-operators-j6glr\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.892243 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-catalog-content\") pod \"redhat-operators-j6glr\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.892275 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-utilities\") pod \"redhat-operators-j6glr\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.927566 4889 generic.go:334] "Generic (PLEG): container finished" podID="ac4fd89a-3d15-4886-ad35-318136b7a519" containerID="932e029ffd683602da62f37daec1f4c3921a9070456e471d192aae035357164c" exitCode=0 Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.928180 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" event={"ID":"ac4fd89a-3d15-4886-ad35-318136b7a519","Type":"ContainerDied","Data":"932e029ffd683602da62f37daec1f4c3921a9070456e471d192aae035357164c"} Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.987242 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpfj"] Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.996160 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-catalog-content\") pod \"redhat-operators-j6glr\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.996237 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-utilities\") pod \"redhat-operators-j6glr\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.996306 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzlnw\" (UniqueName: 
\"kubernetes.io/projected/39c55028-d3da-4bad-92f6-34e5250a9276-kube-api-access-bzlnw\") pod \"redhat-operators-j6glr\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.997275 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-catalog-content\") pod \"redhat-operators-j6glr\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:47 crc kubenswrapper[4889]: I0219 00:08:47.997630 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-utilities\") pod \"redhat-operators-j6glr\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.025079 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzlnw\" (UniqueName: \"kubernetes.io/projected/39c55028-d3da-4bad-92f6-34e5250a9276-kube-api-access-bzlnw\") pod \"redhat-operators-j6glr\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:48 crc kubenswrapper[4889]: W0219 00:08:48.036189 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4841b44f_786c_4f3d_af26_c6ae5b08eee4.slice/crio-b8e3e528b216a60b1e90c1fe9f8b18319987556764ac825f4edef64ea518f57d WatchSource:0}: Error finding container b8e3e528b216a60b1e90c1fe9f8b18319987556764ac825f4edef64ea518f57d: Status 404 returned error can't find the container with id b8e3e528b216a60b1e90c1fe9f8b18319987556764ac825f4edef64ea518f57d Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.081968 4889 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.153338 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x2ncv"] Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.161649 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2ncv"] Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.163493 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.300333 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-utilities\") pod \"redhat-operators-x2ncv\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.300906 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt756\" (UniqueName: \"kubernetes.io/projected/5a2ae970-7f30-494c-ae36-bf25f120c59a-kube-api-access-xt756\") pod \"redhat-operators-x2ncv\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.300965 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-catalog-content\") pod \"redhat-operators-x2ncv\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.402627 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-utilities\") pod \"redhat-operators-x2ncv\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.402680 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt756\" (UniqueName: \"kubernetes.io/projected/5a2ae970-7f30-494c-ae36-bf25f120c59a-kube-api-access-xt756\") pod \"redhat-operators-x2ncv\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.402734 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-catalog-content\") pod \"redhat-operators-x2ncv\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.403397 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-catalog-content\") pod \"redhat-operators-x2ncv\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.403520 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-utilities\") pod \"redhat-operators-x2ncv\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.434014 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt756\" (UniqueName: 
\"kubernetes.io/projected/5a2ae970-7f30-494c-ae36-bf25f120c59a-kube-api-access-xt756\") pod \"redhat-operators-x2ncv\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.503111 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6glr"] Feb 19 00:08:48 crc kubenswrapper[4889]: W0219 00:08:48.514419 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39c55028_d3da_4bad_92f6_34e5250a9276.slice/crio-b5f5e77d3b9e7af5328489865c2479f1e029d11ec9ad30d9fc35e9da1f0b701d WatchSource:0}: Error finding container b5f5e77d3b9e7af5328489865c2479f1e029d11ec9ad30d9fc35e9da1f0b701d: Status 404 returned error can't find the container with id b5f5e77d3b9e7af5328489865c2479f1e029d11ec9ad30d9fc35e9da1f0b701d Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.520603 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.851984 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:48 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:48 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:48 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.852469 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.915031 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x2ncv"] Feb 19 00:08:48 crc kubenswrapper[4889]: W0219 00:08:48.928070 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a2ae970_7f30_494c_ae36_bf25f120c59a.slice/crio-9df14ab96c581374efefe1cbbc25f7ac0cfb654fd90a5b8fb75a66e7397525ef WatchSource:0}: Error finding container 9df14ab96c581374efefe1cbbc25f7ac0cfb654fd90a5b8fb75a66e7397525ef: Status 404 returned error can't find the container with id 9df14ab96c581374efefe1cbbc25f7ac0cfb654fd90a5b8fb75a66e7397525ef Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.952305 4889 generic.go:334] "Generic (PLEG): container finished" podID="39c55028-d3da-4bad-92f6-34e5250a9276" containerID="ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411" exitCode=0 Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.952417 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-j6glr" event={"ID":"39c55028-d3da-4bad-92f6-34e5250a9276","Type":"ContainerDied","Data":"ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411"} Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.952460 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6glr" event={"ID":"39c55028-d3da-4bad-92f6-34e5250a9276","Type":"ContainerStarted","Data":"b5f5e77d3b9e7af5328489865c2479f1e029d11ec9ad30d9fc35e9da1f0b701d"} Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.961488 4889 generic.go:334] "Generic (PLEG): container finished" podID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerID="14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea" exitCode=0 Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.961547 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpfj" event={"ID":"4841b44f-786c-4f3d-af26-c6ae5b08eee4","Type":"ContainerDied","Data":"14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea"} Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.961639 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpfj" event={"ID":"4841b44f-786c-4f3d-af26-c6ae5b08eee4","Type":"ContainerStarted","Data":"b8e3e528b216a60b1e90c1fe9f8b18319987556764ac825f4edef64ea518f57d"} Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.967211 4889 generic.go:334] "Generic (PLEG): container finished" podID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerID="18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700" exitCode=0 Feb 19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.967309 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgwtt" event={"ID":"1eb8ee4a-7192-4edc-a132-248289edb91f","Type":"ContainerDied","Data":"18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700"} Feb 
19 00:08:48 crc kubenswrapper[4889]: I0219 00:08:48.973858 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2ncv" event={"ID":"5a2ae970-7f30-494c-ae36-bf25f120c59a","Type":"ContainerStarted","Data":"9df14ab96c581374efefe1cbbc25f7ac0cfb654fd90a5b8fb75a66e7397525ef"} Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.270567 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.289212 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.424354 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7c3151f-242d-44be-b51f-f98cab7e680e-kubelet-dir\") pod \"b7c3151f-242d-44be-b51f-f98cab7e680e\" (UID: \"b7c3151f-242d-44be-b51f-f98cab7e680e\") " Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.424435 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7c3151f-242d-44be-b51f-f98cab7e680e-kube-api-access\") pod \"b7c3151f-242d-44be-b51f-f98cab7e680e\" (UID: \"b7c3151f-242d-44be-b51f-f98cab7e680e\") " Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.424519 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac4fd89a-3d15-4886-ad35-318136b7a519-secret-volume\") pod \"ac4fd89a-3d15-4886-ad35-318136b7a519\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.424543 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ac4fd89a-3d15-4886-ad35-318136b7a519-config-volume\") pod \"ac4fd89a-3d15-4886-ad35-318136b7a519\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.424588 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rgjh\" (UniqueName: \"kubernetes.io/projected/ac4fd89a-3d15-4886-ad35-318136b7a519-kube-api-access-6rgjh\") pod \"ac4fd89a-3d15-4886-ad35-318136b7a519\" (UID: \"ac4fd89a-3d15-4886-ad35-318136b7a519\") " Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.425149 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7c3151f-242d-44be-b51f-f98cab7e680e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b7c3151f-242d-44be-b51f-f98cab7e680e" (UID: "b7c3151f-242d-44be-b51f-f98cab7e680e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.426315 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac4fd89a-3d15-4886-ad35-318136b7a519-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac4fd89a-3d15-4886-ad35-318136b7a519" (UID: "ac4fd89a-3d15-4886-ad35-318136b7a519"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.432483 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac4fd89a-3d15-4886-ad35-318136b7a519-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac4fd89a-3d15-4886-ad35-318136b7a519" (UID: "ac4fd89a-3d15-4886-ad35-318136b7a519"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.432677 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4fd89a-3d15-4886-ad35-318136b7a519-kube-api-access-6rgjh" (OuterVolumeSpecName: "kube-api-access-6rgjh") pod "ac4fd89a-3d15-4886-ad35-318136b7a519" (UID: "ac4fd89a-3d15-4886-ad35-318136b7a519"). InnerVolumeSpecName "kube-api-access-6rgjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.433046 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c3151f-242d-44be-b51f-f98cab7e680e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b7c3151f-242d-44be-b51f-f98cab7e680e" (UID: "b7c3151f-242d-44be-b51f-f98cab7e680e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.536266 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rgjh\" (UniqueName: \"kubernetes.io/projected/ac4fd89a-3d15-4886-ad35-318136b7a519-kube-api-access-6rgjh\") on node \"crc\" DevicePath \"\"" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.536315 4889 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7c3151f-242d-44be-b51f-f98cab7e680e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.536325 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b7c3151f-242d-44be-b51f-f98cab7e680e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.536334 4889 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ac4fd89a-3d15-4886-ad35-318136b7a519-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.536343 4889 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac4fd89a-3d15-4886-ad35-318136b7a519-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.582258 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 00:08:49 crc kubenswrapper[4889]: E0219 00:08:49.582796 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c3151f-242d-44be-b51f-f98cab7e680e" containerName="pruner" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.582815 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c3151f-242d-44be-b51f-f98cab7e680e" containerName="pruner" Feb 19 00:08:49 crc kubenswrapper[4889]: E0219 00:08:49.582860 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4fd89a-3d15-4886-ad35-318136b7a519" containerName="collect-profiles" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.582868 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4fd89a-3d15-4886-ad35-318136b7a519" containerName="collect-profiles" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.583068 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c3151f-242d-44be-b51f-f98cab7e680e" containerName="pruner" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.583116 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4fd89a-3d15-4886-ad35-318136b7a519" containerName="collect-profiles" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.583967 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.586308 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.590898 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.594094 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.739636 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09d59988-955a-49db-a01d-26c51ac2d056-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"09d59988-955a-49db-a01d-26c51ac2d056\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.739867 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09d59988-955a-49db-a01d-26c51ac2d056-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"09d59988-955a-49db-a01d-26c51ac2d056\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.840854 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09d59988-955a-49db-a01d-26c51ac2d056-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"09d59988-955a-49db-a01d-26c51ac2d056\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.840907 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/09d59988-955a-49db-a01d-26c51ac2d056-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"09d59988-955a-49db-a01d-26c51ac2d056\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.840995 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09d59988-955a-49db-a01d-26c51ac2d056-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"09d59988-955a-49db-a01d-26c51ac2d056\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.850023 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:49 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:49 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:49 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.850149 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.860297 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09d59988-955a-49db-a01d-26c51ac2d056-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"09d59988-955a-49db-a01d-26c51ac2d056\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:08:49 crc kubenswrapper[4889]: I0219 00:08:49.925349 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.023491 4889 generic.go:334] "Generic (PLEG): container finished" podID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerID="57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d" exitCode=0 Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.023530 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2ncv" event={"ID":"5a2ae970-7f30-494c-ae36-bf25f120c59a","Type":"ContainerDied","Data":"57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d"} Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.030741 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.030741 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-7s7mv" event={"ID":"ac4fd89a-3d15-4886-ad35-318136b7a519","Type":"ContainerDied","Data":"a4f53ad009a60a7f3271e6861446e19904d14607f8551e0a98f0c9e3d0d2a7f9"} Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.030815 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4f53ad009a60a7f3271e6861446e19904d14607f8551e0a98f0c9e3d0d2a7f9" Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.035386 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b7c3151f-242d-44be-b51f-f98cab7e680e","Type":"ContainerDied","Data":"66951b6a093a0c3ece457edbcba6a01e6ccc48cc8ba1963a9c7997592ca2ccad"} Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.035467 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66951b6a093a0c3ece457edbcba6a01e6ccc48cc8ba1963a9c7997592ca2ccad" Feb 19 00:08:50 crc 
kubenswrapper[4889]: I0219 00:08:50.035564 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.274266 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 00:08:50 crc kubenswrapper[4889]: W0219 00:08:50.296873 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod09d59988_955a_49db_a01d_26c51ac2d056.slice/crio-9a57659703b920dea590a612ecf98c16b4d0c85fb9aeb036f0acaee05699ed79 WatchSource:0}: Error finding container 9a57659703b920dea590a612ecf98c16b4d0c85fb9aeb036f0acaee05699ed79: Status 404 returned error can't find the container with id 9a57659703b920dea590a612ecf98c16b4d0c85fb9aeb036f0acaee05699ed79 Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.852445 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:50 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:50 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:50 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:50 crc kubenswrapper[4889]: I0219 00:08:50.852998 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:51 crc kubenswrapper[4889]: I0219 00:08:51.046584 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"09d59988-955a-49db-a01d-26c51ac2d056","Type":"ContainerStarted","Data":"9f7e9be8c271a4bafe303bce8bb4ec0093b5f1d6b6f1e5fadf598506fb7d884b"} Feb 19 00:08:51 crc kubenswrapper[4889]: I0219 00:08:51.046638 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09d59988-955a-49db-a01d-26c51ac2d056","Type":"ContainerStarted","Data":"9a57659703b920dea590a612ecf98c16b4d0c85fb9aeb036f0acaee05699ed79"} Feb 19 00:08:51 crc kubenswrapper[4889]: I0219 00:08:51.074943 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.074860087 podStartE2EDuration="2.074860087s" podCreationTimestamp="2026-02-19 00:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:08:51.065697624 +0000 UTC m=+157.030362615" watchObservedRunningTime="2026-02-19 00:08:51.074860087 +0000 UTC m=+157.039525078" Feb 19 00:08:51 crc kubenswrapper[4889]: I0219 00:08:51.225878 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:51 crc kubenswrapper[4889]: I0219 00:08:51.232556 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-zmbvn" Feb 19 00:08:51 crc kubenswrapper[4889]: I0219 00:08:51.726613 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9sgpt" Feb 19 00:08:51 crc kubenswrapper[4889]: I0219 00:08:51.851330 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:51 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld 
Feb 19 00:08:51 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:51 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:51 crc kubenswrapper[4889]: I0219 00:08:51.851417 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:52 crc kubenswrapper[4889]: I0219 00:08:52.150511 4889 generic.go:334] "Generic (PLEG): container finished" podID="09d59988-955a-49db-a01d-26c51ac2d056" containerID="9f7e9be8c271a4bafe303bce8bb4ec0093b5f1d6b6f1e5fadf598506fb7d884b" exitCode=0 Feb 19 00:08:52 crc kubenswrapper[4889]: I0219 00:08:52.153006 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09d59988-955a-49db-a01d-26c51ac2d056","Type":"ContainerDied","Data":"9f7e9be8c271a4bafe303bce8bb4ec0093b5f1d6b6f1e5fadf598506fb7d884b"} Feb 19 00:08:52 crc kubenswrapper[4889]: I0219 00:08:52.849920 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:52 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:52 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:52 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:52 crc kubenswrapper[4889]: I0219 00:08:52.850358 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:53 crc kubenswrapper[4889]: I0219 00:08:53.849705 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:53 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:53 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:53 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:53 crc kubenswrapper[4889]: I0219 00:08:53.849820 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:54 crc kubenswrapper[4889]: I0219 00:08:54.847949 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:54 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:54 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:54 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:54 crc kubenswrapper[4889]: I0219 00:08:54.848384 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:55 crc kubenswrapper[4889]: I0219 00:08:55.848122 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:55 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:55 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:55 crc 
kubenswrapper[4889]: healthz check failed Feb 19 00:08:55 crc kubenswrapper[4889]: I0219 00:08:55.848283 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:56 crc kubenswrapper[4889]: I0219 00:08:56.291375 4889 patch_prober.go:28] interesting pod/console-f9d7485db-ft7cw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 19 00:08:56 crc kubenswrapper[4889]: I0219 00:08:56.291543 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-ft7cw" podUID="d58c7e7a-3804-4b7a-bfb0-e79b50d92710" containerName="console" probeResult="failure" output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 19 00:08:56 crc kubenswrapper[4889]: I0219 00:08:56.701990 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:08:56 crc kubenswrapper[4889]: I0219 00:08:56.702051 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:08:56 crc kubenswrapper[4889]: I0219 00:08:56.702143 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": 
dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:08:56 crc kubenswrapper[4889]: I0219 00:08:56.702257 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:08:56 crc kubenswrapper[4889]: I0219 00:08:56.849355 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:56 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:56 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:56 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:56 crc kubenswrapper[4889]: I0219 00:08:56.849481 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:57 crc kubenswrapper[4889]: I0219 00:08:57.848822 4889 patch_prober.go:28] interesting pod/router-default-5444994796-xf92p container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:08:57 crc kubenswrapper[4889]: [-]has-synced failed: reason withheld Feb 19 00:08:57 crc kubenswrapper[4889]: [+]process-running ok Feb 19 00:08:57 crc kubenswrapper[4889]: healthz check failed Feb 19 00:08:57 crc kubenswrapper[4889]: I0219 00:08:57.851475 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xf92p" 
podUID="d47bfc68-8f0f-4717-be7a-fccf98897cda" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:08:58 crc kubenswrapper[4889]: I0219 00:08:58.853449 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:58 crc kubenswrapper[4889]: I0219 00:08:58.855947 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xf92p" Feb 19 00:08:58 crc kubenswrapper[4889]: I0219 00:08:58.942063 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:58 crc kubenswrapper[4889]: I0219 00:08:58.976972 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66e9b544-b66c-43d6-8d8d-d6231a70a6be-metrics-certs\") pod \"network-metrics-daemon-sw97l\" (UID: \"66e9b544-b66c-43d6-8d8d-d6231a70a6be\") " pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:08:59 crc kubenswrapper[4889]: I0219 00:08:59.073384 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sw97l" Feb 19 00:09:01 crc kubenswrapper[4889]: I0219 00:09:01.423164 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:09:01 crc kubenswrapper[4889]: I0219 00:09:01.483982 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09d59988-955a-49db-a01d-26c51ac2d056-kubelet-dir\") pod \"09d59988-955a-49db-a01d-26c51ac2d056\" (UID: \"09d59988-955a-49db-a01d-26c51ac2d056\") " Feb 19 00:09:01 crc kubenswrapper[4889]: I0219 00:09:01.484102 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09d59988-955a-49db-a01d-26c51ac2d056-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "09d59988-955a-49db-a01d-26c51ac2d056" (UID: "09d59988-955a-49db-a01d-26c51ac2d056"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:09:01 crc kubenswrapper[4889]: I0219 00:09:01.484290 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09d59988-955a-49db-a01d-26c51ac2d056-kube-api-access\") pod \"09d59988-955a-49db-a01d-26c51ac2d056\" (UID: \"09d59988-955a-49db-a01d-26c51ac2d056\") " Feb 19 00:09:01 crc kubenswrapper[4889]: I0219 00:09:01.484641 4889 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09d59988-955a-49db-a01d-26c51ac2d056-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:09:01 crc kubenswrapper[4889]: I0219 00:09:01.512451 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09d59988-955a-49db-a01d-26c51ac2d056-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "09d59988-955a-49db-a01d-26c51ac2d056" (UID: "09d59988-955a-49db-a01d-26c51ac2d056"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:09:01 crc kubenswrapper[4889]: I0219 00:09:01.586397 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/09d59988-955a-49db-a01d-26c51ac2d056-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:09:02 crc kubenswrapper[4889]: I0219 00:09:02.333259 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"09d59988-955a-49db-a01d-26c51ac2d056","Type":"ContainerDied","Data":"9a57659703b920dea590a612ecf98c16b4d0c85fb9aeb036f0acaee05699ed79"} Feb 19 00:09:02 crc kubenswrapper[4889]: I0219 00:09:02.333346 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a57659703b920dea590a612ecf98c16b4d0c85fb9aeb036f0acaee05699ed79" Feb 19 00:09:02 crc kubenswrapper[4889]: I0219 00:09:02.333442 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.120658 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.297128 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.302063 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-ft7cw" Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.702277 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:09:06 crc 
kubenswrapper[4889]: I0219 00:09:06.702354 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.702392 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.702482 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.702564 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-fp64s" Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.703459 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.703458 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"86435c9d5ed005ac304fa6f316d9fa2faac2df357752e35bdfa3b66006e06fbf"} pod="openshift-console/downloads-7954f5f757-fp64s" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 19 00:09:06 crc 
kubenswrapper[4889]: I0219 00:09:06.703535 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:09:06 crc kubenswrapper[4889]: I0219 00:09:06.703580 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" containerID="cri-o://86435c9d5ed005ac304fa6f316d9fa2faac2df357752e35bdfa3b66006e06fbf" gracePeriod=2 Feb 19 00:09:07 crc kubenswrapper[4889]: I0219 00:09:07.401284 4889 generic.go:334] "Generic (PLEG): container finished" podID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerID="86435c9d5ed005ac304fa6f316d9fa2faac2df357752e35bdfa3b66006e06fbf" exitCode=0 Feb 19 00:09:07 crc kubenswrapper[4889]: I0219 00:09:07.401342 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fp64s" event={"ID":"61f5ec49-57ce-4a4a-86b6-fed43a63c82e","Type":"ContainerDied","Data":"86435c9d5ed005ac304fa6f316d9fa2faac2df357752e35bdfa3b66006e06fbf"} Feb 19 00:09:07 crc kubenswrapper[4889]: I0219 00:09:07.781675 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:09:07 crc kubenswrapper[4889]: I0219 00:09:07.781786 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 19 00:09:14 crc kubenswrapper[4889]: I0219 00:09:14.161488 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sw97l"] Feb 19 00:09:15 crc kubenswrapper[4889]: I0219 00:09:15.459706 4889 generic.go:334] "Generic (PLEG): container finished" podID="0bdda273-5648-4fa5-868c-48142d764012" containerID="1d8b9e2e5413b7609abd9500233b06d490454c34b4ea3e19cfd726c3d448a642" exitCode=0 Feb 19 00:09:15 crc kubenswrapper[4889]: I0219 00:09:15.459926 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-vf894" event={"ID":"0bdda273-5648-4fa5-868c-48142d764012","Type":"ContainerDied","Data":"1d8b9e2e5413b7609abd9500233b06d490454c34b4ea3e19cfd726c3d448a642"} Feb 19 00:09:15 crc kubenswrapper[4889]: E0219 00:09:15.682925 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 00:09:15 crc kubenswrapper[4889]: E0219 00:09:15.683202 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rcvt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jhjcw_openshift-marketplace(0533f084-8fff-43b9-b7d6-a4fc0d6b85c5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:09:15 crc kubenswrapper[4889]: E0219 00:09:15.684444 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jhjcw" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" Feb 19 00:09:16 crc 
kubenswrapper[4889]: I0219 00:09:16.716689 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:09:16 crc kubenswrapper[4889]: I0219 00:09:16.717147 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:09:17 crc kubenswrapper[4889]: I0219 00:09:17.248297 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-8sg2r" Feb 19 00:09:18 crc kubenswrapper[4889]: E0219 00:09:18.305034 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jhjcw" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" Feb 19 00:09:18 crc kubenswrapper[4889]: E0219 00:09:18.727072 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 00:09:18 crc kubenswrapper[4889]: E0219 00:09:18.727609 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flq2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-s7t7k_openshift-marketplace(e0bf937f-3956-40f1-9e52-d2000c46291c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:09:18 crc kubenswrapper[4889]: E0219 00:09:18.730303 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-s7t7k" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" Feb 19 00:09:18 crc 
kubenswrapper[4889]: W0219 00:09:18.747357 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e9b544_b66c_43d6_8d8d_d6231a70a6be.slice/crio-968772e19205b8999b0fecc72b1a8dbd2e721ab67bfa25f508f06b86abdf9024 WatchSource:0}: Error finding container 968772e19205b8999b0fecc72b1a8dbd2e721ab67bfa25f508f06b86abdf9024: Status 404 returned error can't find the container with id 968772e19205b8999b0fecc72b1a8dbd2e721ab67bfa25f508f06b86abdf9024 Feb 19 00:09:19 crc kubenswrapper[4889]: I0219 00:09:19.487336 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sw97l" event={"ID":"66e9b544-b66c-43d6-8d8d-d6231a70a6be","Type":"ContainerStarted","Data":"968772e19205b8999b0fecc72b1a8dbd2e721ab67bfa25f508f06b86abdf9024"} Feb 19 00:09:20 crc kubenswrapper[4889]: E0219 00:09:20.264781 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-s7t7k" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" Feb 19 00:09:20 crc kubenswrapper[4889]: E0219 00:09:20.352591 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 00:09:20 crc kubenswrapper[4889]: E0219 00:09:20.352866 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4d959,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ghpfj_openshift-marketplace(4841b44f-786c-4f3d-af26-c6ae5b08eee4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:09:20 crc kubenswrapper[4889]: E0219 00:09:20.354364 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ghpfj" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" Feb 19 00:09:22 crc 
kubenswrapper[4889]: I0219 00:09:22.624800 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.177377 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 00:09:24 crc kubenswrapper[4889]: E0219 00:09:24.177597 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09d59988-955a-49db-a01d-26c51ac2d056" containerName="pruner" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.177610 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="09d59988-955a-49db-a01d-26c51ac2d056" containerName="pruner" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.177710 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="09d59988-955a-49db-a01d-26c51ac2d056" containerName="pruner" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.178055 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.186320 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.186579 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.190967 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.289853 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.289982 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.391091 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.391193 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.391301 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.416267 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:24 crc kubenswrapper[4889]: I0219 00:09:24.506158 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:25 crc kubenswrapper[4889]: E0219 00:09:25.319765 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ghpfj" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.370916 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.509185 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwwpp\" (UniqueName: \"kubernetes.io/projected/0bdda273-5648-4fa5-868c-48142d764012-kube-api-access-gwwpp\") pod \"0bdda273-5648-4fa5-868c-48142d764012\" (UID: \"0bdda273-5648-4fa5-868c-48142d764012\") " Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.509316 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0bdda273-5648-4fa5-868c-48142d764012-serviceca\") pod \"0bdda273-5648-4fa5-868c-48142d764012\" (UID: \"0bdda273-5648-4fa5-868c-48142d764012\") " Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.512643 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdda273-5648-4fa5-868c-48142d764012-serviceca" (OuterVolumeSpecName: "serviceca") pod "0bdda273-5648-4fa5-868c-48142d764012" (UID: "0bdda273-5648-4fa5-868c-48142d764012"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.520047 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdda273-5648-4fa5-868c-48142d764012-kube-api-access-gwwpp" (OuterVolumeSpecName: "kube-api-access-gwwpp") pod "0bdda273-5648-4fa5-868c-48142d764012" (UID: "0bdda273-5648-4fa5-868c-48142d764012"). InnerVolumeSpecName "kube-api-access-gwwpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.527365 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-vf894" event={"ID":"0bdda273-5648-4fa5-868c-48142d764012","Type":"ContainerDied","Data":"57543aa6d76858e3278ccfaaef87337535a02db3197525a9a48e71b3276f3f0a"} Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.527415 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57543aa6d76858e3278ccfaaef87337535a02db3197525a9a48e71b3276f3f0a" Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.527482 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-vf894" Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.611284 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwwpp\" (UniqueName: \"kubernetes.io/projected/0bdda273-5648-4fa5-868c-48142d764012-kube-api-access-gwwpp\") on node \"crc\" DevicePath \"\"" Feb 19 00:09:25 crc kubenswrapper[4889]: I0219 00:09:25.611717 4889 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0bdda273-5648-4fa5-868c-48142d764012-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 00:09:25 crc kubenswrapper[4889]: E0219 00:09:25.828670 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 00:09:25 crc kubenswrapper[4889]: E0219 00:09:25.829298 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xt756,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-x2ncv_openshift-marketplace(5a2ae970-7f30-494c-ae36-bf25f120c59a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:09:25 crc kubenswrapper[4889]: E0219 00:09:25.830553 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-x2ncv" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" Feb 19 00:09:26 crc 
kubenswrapper[4889]: E0219 00:09:26.135828 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 19 00:09:26 crc kubenswrapper[4889]: E0219 00:09:26.136515 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bzlnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-j6glr_openshift-marketplace(39c55028-d3da-4bad-92f6-34e5250a9276): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:09:26 crc kubenswrapper[4889]: E0219 00:09:26.138055 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j6glr" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.255151 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.534544 4889 generic.go:334] "Generic (PLEG): container finished" podID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerID="90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323" exitCode=0 Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.534729 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6jh8" event={"ID":"f09a3256-3dd8-4e60-bee8-379678cf15f7","Type":"ContainerDied","Data":"90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323"} Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.537102 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22jj8" event={"ID":"c85a27e0-27e4-4df6-8773-fb60d29697ff","Type":"ContainerStarted","Data":"28d8a462947b32ba78d01aab56728a06f8baddb6453366ce8195ef16e5f2c442"} Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.540723 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fp64s" 
event={"ID":"61f5ec49-57ce-4a4a-86b6-fed43a63c82e","Type":"ContainerStarted","Data":"e5656a26e414d08d2abbf2e6158c86ead79616dac392cc5080a2ec0b43347e04"} Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.541427 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fp64s" Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.541549 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.541799 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.542613 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"42b45864-8ca3-42ae-9caf-b37a91d91bd2","Type":"ContainerStarted","Data":"a89716eadd0c8a6f69765170532935703789bc88d67c1afe2688c251fb39e8a5"} Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.544397 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sw97l" event={"ID":"66e9b544-b66c-43d6-8d8d-d6231a70a6be","Type":"ContainerStarted","Data":"21bc68fec5ac21b87844ca0bb52f5951db3a982fed64ee346c9fea5cb96c51bc"} Feb 19 00:09:26 crc kubenswrapper[4889]: E0219 00:09:26.545049 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-j6glr" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" Feb 19 00:09:26 crc kubenswrapper[4889]: E0219 00:09:26.547061 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-x2ncv" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.701837 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.701903 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.701919 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:09:26 crc kubenswrapper[4889]: I0219 00:09:26.701969 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:09:26 crc kubenswrapper[4889]: E0219 00:09:26.802047 4889 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 00:09:26 crc kubenswrapper[4889]: E0219 00:09:26.802470 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87j65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vgwtt_openshift-marketplace(1eb8ee4a-7192-4edc-a132-248289edb91f): ErrImagePull: rpc error: code = Canceled desc 
= copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:09:26 crc kubenswrapper[4889]: E0219 00:09:26.803833 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vgwtt" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" Feb 19 00:09:27 crc kubenswrapper[4889]: I0219 00:09:27.554267 4889 generic.go:334] "Generic (PLEG): container finished" podID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerID="28d8a462947b32ba78d01aab56728a06f8baddb6453366ce8195ef16e5f2c442" exitCode=0 Feb 19 00:09:27 crc kubenswrapper[4889]: I0219 00:09:27.554624 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22jj8" event={"ID":"c85a27e0-27e4-4df6-8773-fb60d29697ff","Type":"ContainerDied","Data":"28d8a462947b32ba78d01aab56728a06f8baddb6453366ce8195ef16e5f2c442"} Feb 19 00:09:27 crc kubenswrapper[4889]: I0219 00:09:27.557659 4889 generic.go:334] "Generic (PLEG): container finished" podID="42b45864-8ca3-42ae-9caf-b37a91d91bd2" containerID="9c12963e6394b6923825f9dea9b2630a9831e7466b499039a1347a7bbd90f77d" exitCode=0 Feb 19 00:09:27 crc kubenswrapper[4889]: I0219 00:09:27.557793 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"42b45864-8ca3-42ae-9caf-b37a91d91bd2","Type":"ContainerDied","Data":"9c12963e6394b6923825f9dea9b2630a9831e7466b499039a1347a7bbd90f77d"} Feb 19 00:09:27 crc kubenswrapper[4889]: I0219 00:09:27.562890 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sw97l" event={"ID":"66e9b544-b66c-43d6-8d8d-d6231a70a6be","Type":"ContainerStarted","Data":"dc317316db19af2e86def6f0c6ff464880de675aa9c95af9c8601af310f15a4f"} Feb 19 00:09:27 
crc kubenswrapper[4889]: I0219 00:09:27.564254 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:09:27 crc kubenswrapper[4889]: I0219 00:09:27.564293 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:09:27 crc kubenswrapper[4889]: E0219 00:09:27.567842 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vgwtt" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" Feb 19 00:09:27 crc kubenswrapper[4889]: I0219 00:09:27.634515 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sw97l" podStartSLOduration=172.634496333 podStartE2EDuration="2m52.634496333s" podCreationTimestamp="2026-02-19 00:06:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:27.610883334 +0000 UTC m=+193.575548335" watchObservedRunningTime="2026-02-19 00:09:27.634496333 +0000 UTC m=+193.599161324" Feb 19 00:09:28 crc kubenswrapper[4889]: I0219 00:09:28.570557 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6jh8" event={"ID":"f09a3256-3dd8-4e60-bee8-379678cf15f7","Type":"ContainerStarted","Data":"cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66"} Feb 19 00:09:28 crc 
kubenswrapper[4889]: I0219 00:09:28.571180 4889 patch_prober.go:28] interesting pod/downloads-7954f5f757-fp64s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 00:09:28 crc kubenswrapper[4889]: I0219 00:09:28.571248 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fp64s" podUID="61f5ec49-57ce-4a4a-86b6-fed43a63c82e" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 00:09:28 crc kubenswrapper[4889]: I0219 00:09:28.637638 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6jh8" podStartSLOduration=3.994504634 podStartE2EDuration="44.637622046s" podCreationTimestamp="2026-02-19 00:08:44 +0000 UTC" firstStartedPulling="2026-02-19 00:08:46.768895616 +0000 UTC m=+152.733560607" lastFinishedPulling="2026-02-19 00:09:27.412013028 +0000 UTC m=+193.376678019" observedRunningTime="2026-02-19 00:09:28.636301146 +0000 UTC m=+194.600966217" watchObservedRunningTime="2026-02-19 00:09:28.637622046 +0000 UTC m=+194.602287037" Feb 19 00:09:28 crc kubenswrapper[4889]: I0219 00:09:28.902903 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:28 crc kubenswrapper[4889]: I0219 00:09:28.958259 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kubelet-dir\") pod \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\" (UID: \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\") " Feb 19 00:09:28 crc kubenswrapper[4889]: I0219 00:09:28.958351 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "42b45864-8ca3-42ae-9caf-b37a91d91bd2" (UID: "42b45864-8ca3-42ae-9caf-b37a91d91bd2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:09:28 crc kubenswrapper[4889]: I0219 00:09:28.958419 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kube-api-access\") pod \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\" (UID: \"42b45864-8ca3-42ae-9caf-b37a91d91bd2\") " Feb 19 00:09:28 crc kubenswrapper[4889]: I0219 00:09:28.958711 4889 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:09:28 crc kubenswrapper[4889]: I0219 00:09:28.974330 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "42b45864-8ca3-42ae-9caf-b37a91d91bd2" (UID: "42b45864-8ca3-42ae-9caf-b37a91d91bd2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:09:29 crc kubenswrapper[4889]: I0219 00:09:29.059865 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42b45864-8ca3-42ae-9caf-b37a91d91bd2-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:09:29 crc kubenswrapper[4889]: I0219 00:09:29.577200 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 00:09:29 crc kubenswrapper[4889]: I0219 00:09:29.577290 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"42b45864-8ca3-42ae-9caf-b37a91d91bd2","Type":"ContainerDied","Data":"a89716eadd0c8a6f69765170532935703789bc88d67c1afe2688c251fb39e8a5"} Feb 19 00:09:29 crc kubenswrapper[4889]: I0219 00:09:29.577378 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a89716eadd0c8a6f69765170532935703789bc88d67c1afe2688c251fb39e8a5" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.572621 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 00:09:31 crc kubenswrapper[4889]: E0219 00:09:31.573527 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b45864-8ca3-42ae-9caf-b37a91d91bd2" containerName="pruner" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.573550 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b45864-8ca3-42ae-9caf-b37a91d91bd2" containerName="pruner" Feb 19 00:09:31 crc kubenswrapper[4889]: E0219 00:09:31.573570 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdda273-5648-4fa5-868c-48142d764012" containerName="image-pruner" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.573579 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdda273-5648-4fa5-868c-48142d764012" containerName="image-pruner" Feb 
19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.573769 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdda273-5648-4fa5-868c-48142d764012" containerName="image-pruner" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.573791 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b45864-8ca3-42ae-9caf-b37a91d91bd2" containerName="pruner" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.574532 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.577918 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.578451 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.583867 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.699778 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.699898 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-var-lock\") pod \"installer-9-crc\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.699949 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kube-api-access\") pod \"installer-9-crc\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.801024 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kube-api-access\") pod \"installer-9-crc\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.801134 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.801194 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-var-lock\") pod \"installer-9-crc\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.801300 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-var-lock\") pod \"installer-9-crc\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.801308 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.818994 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kube-api-access\") pod \"installer-9-crc\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:31 crc kubenswrapper[4889]: I0219 00:09:31.899992 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 00:09:34 crc kubenswrapper[4889]: I0219 00:09:34.758799 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 00:09:34 crc kubenswrapper[4889]: W0219 00:09:34.770058 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9fdf3c01_72ec_4b9c_bbe8_378746a212de.slice/crio-b4d35a3e89a80fc725b4ffe752a502d43bb8bde80a11126485746a3d2e53c1e2 WatchSource:0}: Error finding container b4d35a3e89a80fc725b4ffe752a502d43bb8bde80a11126485746a3d2e53c1e2: Status 404 returned error can't find the container with id b4d35a3e89a80fc725b4ffe752a502d43bb8bde80a11126485746a3d2e53c1e2 Feb 19 00:09:35 crc kubenswrapper[4889]: I0219 00:09:35.378115 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:09:35 crc kubenswrapper[4889]: I0219 00:09:35.378181 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:09:35 crc kubenswrapper[4889]: I0219 00:09:35.618300 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"9fdf3c01-72ec-4b9c-bbe8-378746a212de","Type":"ContainerStarted","Data":"b4d35a3e89a80fc725b4ffe752a502d43bb8bde80a11126485746a3d2e53c1e2"} Feb 19 00:09:36 crc kubenswrapper[4889]: I0219 00:09:36.627612 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9fdf3c01-72ec-4b9c-bbe8-378746a212de","Type":"ContainerStarted","Data":"3c4ab6960097f77a66a9e0ab8533f2774fbbe59aa4dd5992a0ad90891f43abfc"} Feb 19 00:09:36 crc kubenswrapper[4889]: I0219 00:09:36.629744 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22jj8" event={"ID":"c85a27e0-27e4-4df6-8773-fb60d29697ff","Type":"ContainerStarted","Data":"577bf83cb3b115714f3c337e82c8adbf3420acefbdc52bb1d314043055574fa6"} Feb 19 00:09:36 crc kubenswrapper[4889]: I0219 00:09:36.714046 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fp64s" Feb 19 00:09:36 crc kubenswrapper[4889]: I0219 00:09:36.974951 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:09:37 crc kubenswrapper[4889]: I0219 00:09:37.020620 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:09:37 crc kubenswrapper[4889]: I0219 00:09:37.652438 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.652420083 podStartE2EDuration="6.652420083s" podCreationTimestamp="2026-02-19 00:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:09:37.651511255 +0000 UTC m=+203.616176266" watchObservedRunningTime="2026-02-19 00:09:37.652420083 +0000 UTC m=+203.617085074" Feb 19 00:09:37 crc kubenswrapper[4889]: I0219 00:09:37.678974 
4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-22jj8" podStartSLOduration=6.133055543 podStartE2EDuration="53.678953231s" podCreationTimestamp="2026-02-19 00:08:44 +0000 UTC" firstStartedPulling="2026-02-19 00:08:46.770780794 +0000 UTC m=+152.735445785" lastFinishedPulling="2026-02-19 00:09:34.316678472 +0000 UTC m=+200.281343473" observedRunningTime="2026-02-19 00:09:37.675060757 +0000 UTC m=+203.639725748" watchObservedRunningTime="2026-02-19 00:09:37.678953231 +0000 UTC m=+203.643618222" Feb 19 00:09:37 crc kubenswrapper[4889]: I0219 00:09:37.781604 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:09:37 crc kubenswrapper[4889]: I0219 00:09:37.781754 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:09:37 crc kubenswrapper[4889]: I0219 00:09:37.781839 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:09:37 crc kubenswrapper[4889]: I0219 00:09:37.783011 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f"} pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:09:37 crc kubenswrapper[4889]: 
I0219 00:09:37.783091 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" containerID="cri-o://2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f" gracePeriod=600 Feb 19 00:09:38 crc kubenswrapper[4889]: I0219 00:09:38.641052 4889 generic.go:334] "Generic (PLEG): container finished" podID="900d194e-937f-4a59-abba-21ed9f94f24f" containerID="2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f" exitCode=0 Feb 19 00:09:38 crc kubenswrapper[4889]: I0219 00:09:38.641400 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerDied","Data":"2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f"} Feb 19 00:09:45 crc kubenswrapper[4889]: I0219 00:09:45.316329 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:09:45 crc kubenswrapper[4889]: I0219 00:09:45.319210 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:09:45 crc kubenswrapper[4889]: I0219 00:09:45.366991 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:09:45 crc kubenswrapper[4889]: I0219 00:09:45.723696 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:09:46 crc kubenswrapper[4889]: I0219 00:09:46.221579 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22jj8"] Feb 19 00:09:47 crc kubenswrapper[4889]: I0219 00:09:47.691876 4889 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/community-operators-22jj8" podUID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerName="registry-server" containerID="cri-o://577bf83cb3b115714f3c337e82c8adbf3420acefbdc52bb1d314043055574fa6" gracePeriod=2 Feb 19 00:09:48 crc kubenswrapper[4889]: I0219 00:09:48.701559 4889 generic.go:334] "Generic (PLEG): container finished" podID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerID="577bf83cb3b115714f3c337e82c8adbf3420acefbdc52bb1d314043055574fa6" exitCode=0 Feb 19 00:09:48 crc kubenswrapper[4889]: I0219 00:09:48.701766 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22jj8" event={"ID":"c85a27e0-27e4-4df6-8773-fb60d29697ff","Type":"ContainerDied","Data":"577bf83cb3b115714f3c337e82c8adbf3420acefbdc52bb1d314043055574fa6"} Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.455205 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.548119 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-catalog-content\") pod \"c85a27e0-27e4-4df6-8773-fb60d29697ff\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.548195 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-utilities\") pod \"c85a27e0-27e4-4df6-8773-fb60d29697ff\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.548482 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b98x\" (UniqueName: \"kubernetes.io/projected/c85a27e0-27e4-4df6-8773-fb60d29697ff-kube-api-access-8b98x\") 
pod \"c85a27e0-27e4-4df6-8773-fb60d29697ff\" (UID: \"c85a27e0-27e4-4df6-8773-fb60d29697ff\") " Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.549369 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-utilities" (OuterVolumeSpecName: "utilities") pod "c85a27e0-27e4-4df6-8773-fb60d29697ff" (UID: "c85a27e0-27e4-4df6-8773-fb60d29697ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.550176 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.556314 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c85a27e0-27e4-4df6-8773-fb60d29697ff-kube-api-access-8b98x" (OuterVolumeSpecName: "kube-api-access-8b98x") pod "c85a27e0-27e4-4df6-8773-fb60d29697ff" (UID: "c85a27e0-27e4-4df6-8773-fb60d29697ff"). InnerVolumeSpecName "kube-api-access-8b98x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.600990 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c85a27e0-27e4-4df6-8773-fb60d29697ff" (UID: "c85a27e0-27e4-4df6-8773-fb60d29697ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.652127 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b98x\" (UniqueName: \"kubernetes.io/projected/c85a27e0-27e4-4df6-8773-fb60d29697ff-kube-api-access-8b98x\") on node \"crc\" DevicePath \"\"" Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.652182 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c85a27e0-27e4-4df6-8773-fb60d29697ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.728245 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22jj8" Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.746457 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22jj8" event={"ID":"c85a27e0-27e4-4df6-8773-fb60d29697ff","Type":"ContainerDied","Data":"a0dbfd4ce634c7bca6fe7da4651235480a9a3605ef32f5b88761740a57de3e63"} Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.746523 4889 scope.go:117] "RemoveContainer" containerID="577bf83cb3b115714f3c337e82c8adbf3420acefbdc52bb1d314043055574fa6" Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.780667 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22jj8"] Feb 19 00:09:52 crc kubenswrapper[4889]: I0219 00:09:52.792069 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-22jj8"] Feb 19 00:09:53 crc kubenswrapper[4889]: I0219 00:09:53.425339 4889 scope.go:117] "RemoveContainer" containerID="28d8a462947b32ba78d01aab56728a06f8baddb6453366ce8195ef16e5f2c442" Feb 19 00:09:53 crc kubenswrapper[4889]: I0219 00:09:53.453111 4889 scope.go:117] "RemoveContainer" 
containerID="ad5383c5ce66e0db0cc9f15e04ef6498e8a16c8db6ec90da31ea23a2cd088aed" Feb 19 00:09:53 crc kubenswrapper[4889]: I0219 00:09:53.735512 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"2231eeacede3ecd2782301c9a109e45033b30fb6e34ee69f5b69e52a119b5056"} Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.732167 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c85a27e0-27e4-4df6-8773-fb60d29697ff" path="/var/lib/kubelet/pods/c85a27e0-27e4-4df6-8773-fb60d29697ff/volumes" Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.744757 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2ncv" event={"ID":"5a2ae970-7f30-494c-ae36-bf25f120c59a","Type":"ContainerStarted","Data":"00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb"} Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.747298 4889 generic.go:334] "Generic (PLEG): container finished" podID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerID="0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1" exitCode=0 Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.747374 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7t7k" event={"ID":"e0bf937f-3956-40f1-9e52-d2000c46291c","Type":"ContainerDied","Data":"0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1"} Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.754602 4889 generic.go:334] "Generic (PLEG): container finished" podID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerID="42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534" exitCode=0 Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.754829 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhjcw" 
event={"ID":"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5","Type":"ContainerDied","Data":"42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534"} Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.761617 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6glr" event={"ID":"39c55028-d3da-4bad-92f6-34e5250a9276","Type":"ContainerStarted","Data":"95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a"} Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.765539 4889 generic.go:334] "Generic (PLEG): container finished" podID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerID="0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32" exitCode=0 Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.765656 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpfj" event={"ID":"4841b44f-786c-4f3d-af26-c6ae5b08eee4","Type":"ContainerDied","Data":"0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32"} Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.776948 4889 generic.go:334] "Generic (PLEG): container finished" podID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerID="e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40" exitCode=0 Feb 19 00:09:54 crc kubenswrapper[4889]: I0219 00:09:54.777718 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgwtt" event={"ID":"1eb8ee4a-7192-4edc-a132-248289edb91f","Type":"ContainerDied","Data":"e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40"} Feb 19 00:09:55 crc kubenswrapper[4889]: I0219 00:09:55.789807 4889 generic.go:334] "Generic (PLEG): container finished" podID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerID="00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb" exitCode=0 Feb 19 00:09:55 crc kubenswrapper[4889]: I0219 00:09:55.789902 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-x2ncv" event={"ID":"5a2ae970-7f30-494c-ae36-bf25f120c59a","Type":"ContainerDied","Data":"00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb"} Feb 19 00:09:56 crc kubenswrapper[4889]: I0219 00:09:56.798795 4889 generic.go:334] "Generic (PLEG): container finished" podID="39c55028-d3da-4bad-92f6-34e5250a9276" containerID="95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a" exitCode=0 Feb 19 00:09:56 crc kubenswrapper[4889]: I0219 00:09:56.798859 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6glr" event={"ID":"39c55028-d3da-4bad-92f6-34e5250a9276","Type":"ContainerDied","Data":"95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a"} Feb 19 00:10:02 crc kubenswrapper[4889]: I0219 00:10:02.840595 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7t7k" event={"ID":"e0bf937f-3956-40f1-9e52-d2000c46291c","Type":"ContainerStarted","Data":"74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f"} Feb 19 00:10:03 crc kubenswrapper[4889]: I0219 00:10:03.870032 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s7t7k" podStartSLOduration=5.412979635 podStartE2EDuration="1m18.870015731s" podCreationTimestamp="2026-02-19 00:08:45 +0000 UTC" firstStartedPulling="2026-02-19 00:08:46.773084374 +0000 UTC m=+152.737749375" lastFinishedPulling="2026-02-19 00:10:00.23012047 +0000 UTC m=+226.194785471" observedRunningTime="2026-02-19 00:10:03.866783119 +0000 UTC m=+229.831448110" watchObservedRunningTime="2026-02-19 00:10:03.870015731 +0000 UTC m=+229.834680712" Feb 19 00:10:05 crc kubenswrapper[4889]: I0219 00:10:05.517948 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:10:05 crc kubenswrapper[4889]: I0219 00:10:05.518024 4889 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:10:05 crc kubenswrapper[4889]: I0219 00:10:05.560980 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:10:13 crc kubenswrapper[4889]: I0219 00:10:13.922844 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgwtt" event={"ID":"1eb8ee4a-7192-4edc-a132-248289edb91f","Type":"ContainerStarted","Data":"3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e"} Feb 19 00:10:13 crc kubenswrapper[4889]: I0219 00:10:13.926944 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2ncv" event={"ID":"5a2ae970-7f30-494c-ae36-bf25f120c59a","Type":"ContainerStarted","Data":"fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d"} Feb 19 00:10:13 crc kubenswrapper[4889]: I0219 00:10:13.932482 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhjcw" event={"ID":"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5","Type":"ContainerStarted","Data":"bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb"} Feb 19 00:10:13 crc kubenswrapper[4889]: I0219 00:10:13.936112 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6glr" event={"ID":"39c55028-d3da-4bad-92f6-34e5250a9276","Type":"ContainerStarted","Data":"4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4"} Feb 19 00:10:13 crc kubenswrapper[4889]: I0219 00:10:13.938617 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpfj" event={"ID":"4841b44f-786c-4f3d-af26-c6ae5b08eee4","Type":"ContainerStarted","Data":"7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a"} Feb 19 00:10:13 crc kubenswrapper[4889]: I0219 00:10:13.945919 4889 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vgwtt" podStartSLOduration=5.009062886 podStartE2EDuration="1m27.945899531s" podCreationTimestamp="2026-02-19 00:08:46 +0000 UTC" firstStartedPulling="2026-02-19 00:08:48.972966788 +0000 UTC m=+154.937631779" lastFinishedPulling="2026-02-19 00:10:11.909803433 +0000 UTC m=+237.874468424" observedRunningTime="2026-02-19 00:10:13.944054643 +0000 UTC m=+239.908719664" watchObservedRunningTime="2026-02-19 00:10:13.945899531 +0000 UTC m=+239.910564522" Feb 19 00:10:13 crc kubenswrapper[4889]: I0219 00:10:13.971639 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ghpfj" podStartSLOduration=4.021958022 podStartE2EDuration="1m26.971607463s" podCreationTimestamp="2026-02-19 00:08:47 +0000 UTC" firstStartedPulling="2026-02-19 00:08:48.96820429 +0000 UTC m=+154.932869281" lastFinishedPulling="2026-02-19 00:10:11.917853731 +0000 UTC m=+237.882518722" observedRunningTime="2026-02-19 00:10:13.970846888 +0000 UTC m=+239.935511889" watchObservedRunningTime="2026-02-19 00:10:13.971607463 +0000 UTC m=+239.936272454" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.006467 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x2ncv" podStartSLOduration=6.090444977 podStartE2EDuration="1m26.006427176s" podCreationTimestamp="2026-02-19 00:08:48 +0000 UTC" firstStartedPulling="2026-02-19 00:08:50.029876141 +0000 UTC m=+155.994541132" lastFinishedPulling="2026-02-19 00:10:09.94585834 +0000 UTC m=+235.910523331" observedRunningTime="2026-02-19 00:10:14.001170448 +0000 UTC m=+239.965835459" watchObservedRunningTime="2026-02-19 00:10:14.006427176 +0000 UTC m=+239.971092167" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.026600 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-jhjcw" podStartSLOduration=10.785391474 podStartE2EDuration="1m30.02657396s" podCreationTimestamp="2026-02-19 00:08:44 +0000 UTC" firstStartedPulling="2026-02-19 00:08:46.776792869 +0000 UTC m=+152.741457860" lastFinishedPulling="2026-02-19 00:10:06.017975355 +0000 UTC m=+231.982640346" observedRunningTime="2026-02-19 00:10:14.024630367 +0000 UTC m=+239.989295358" watchObservedRunningTime="2026-02-19 00:10:14.02657396 +0000 UTC m=+239.991238951" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.215030 4889 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.215368 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerName="registry-server" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.215386 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerName="registry-server" Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.215406 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerName="extract-utilities" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.215414 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerName="extract-utilities" Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.215431 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerName="extract-content" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.215439 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerName="extract-content" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.215561 4889 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c85a27e0-27e4-4df6-8773-fb60d29697ff" containerName="registry-server" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.216053 4889 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.216251 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.216425 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7" gracePeriod=15 Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.216449 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5" gracePeriod=15 Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.216578 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc" gracePeriod=15 Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.216553 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7" gracePeriod=15 Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 
00:10:14.216555 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91" gracePeriod=15 Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217202 4889 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.217506 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217530 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.217543 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217552 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.217565 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217573 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.217583 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:10:14 crc 
kubenswrapper[4889]: I0219 00:10:14.217591 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.217600 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217608 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.217623 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217630 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.217646 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217656 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217780 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217799 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217808 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217818 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217831 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.217841 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.249263 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.249852 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.249911 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.249973 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.250025 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.250057 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.250095 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.250138 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.266705 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j6glr" podStartSLOduration=4.325813694 podStartE2EDuration="1m27.266675153s" podCreationTimestamp="2026-02-19 00:08:47 +0000 UTC" firstStartedPulling="2026-02-19 00:08:48.967968143 +0000 UTC m=+154.932633134" lastFinishedPulling="2026-02-19 00:10:11.908829602 +0000 UTC m=+237.873494593" observedRunningTime="2026-02-19 00:10:14.048633455 +0000 UTC m=+240.013298446" watchObservedRunningTime="2026-02-19 00:10:14.266675153 +0000 UTC m=+240.231340144"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.266992 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.351977 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352058 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352078 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352100 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352129 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352152 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352174 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352133 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352187 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352220 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352258 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352265 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352280 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352301 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352309 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.352348 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.562909 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:10:14 crc kubenswrapper[4889]: E0219 00:10:14.593018 4889 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.148:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18957d50d6a2b712 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:10:14.59200181 +0000 UTC m=+240.556666811,LastTimestamp:2026-02-19 00:10:14.59200181 +0000 UTC m=+240.556666811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.731484 4889 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.735591 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.918820 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jhjcw"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.918875 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jhjcw"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.948239 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.950246 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.952033 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5" exitCode=0
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.952073 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7" exitCode=0
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.952085 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc" exitCode=2
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.952178 4889 scope.go:117] "RemoveContainer" containerID="bcae05d2f8c23d4b4e1a4d3be29d23fb75a1276449de170e3d52ac41a0268c47"
Feb 19 00:10:14 crc kubenswrapper[4889]: I0219 00:10:14.960615 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"8acf3a432cf8aa37e9cbb1289ff752f21483229f84a133cc7d277fe6a20f20b3"}
Feb 19 00:10:15 crc kubenswrapper[4889]: E0219 00:10:15.554336 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: E0219 00:10:15.555013 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: E0219 00:10:15.555516 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: E0219 00:10:15.555997 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: E0219 00:10:15.558786 4889 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.558899 4889 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 19 00:10:15 crc kubenswrapper[4889]: E0219 00:10:15.559645 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="200ms"
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.574210 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s7t7k"
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.575029 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.575824 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: E0219 00:10:15.760560 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="400ms"
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.963161 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jhjcw" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerName="registry-server" probeResult="failure" output=<
Feb 19 00:10:15 crc kubenswrapper[4889]: timeout: failed to connect service ":50051" within 1s
Feb 19 00:10:15 crc kubenswrapper[4889]: >
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.969695 4889 generic.go:334] "Generic (PLEG): container finished" podID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" containerID="3c4ab6960097f77a66a9e0ab8533f2774fbbe59aa4dd5992a0ad90891f43abfc" exitCode=0
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.969913 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9fdf3c01-72ec-4b9c-bbe8-378746a212de","Type":"ContainerDied","Data":"3c4ab6960097f77a66a9e0ab8533f2774fbbe59aa4dd5992a0ad90891f43abfc"}
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.971004 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.971375 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.971692 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.973204 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 19 00:10:15 crc kubenswrapper[4889]: I0219 00:10:15.973859 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91" exitCode=0
Feb 19 00:10:16 crc kubenswrapper[4889]: E0219 00:10:16.063214 4889 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.148:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18957d50d6a2b712 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:10:14.59200181 +0000 UTC m=+240.556666811,LastTimestamp:2026-02-19 00:10:14.59200181 +0000 UTC m=+240.556666811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:10:16 crc kubenswrapper[4889]: E0219 00:10:16.161714 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="800ms"
Feb 19 00:10:16 crc kubenswrapper[4889]: E0219 00:10:16.963048 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="1.6s"
Feb 19 00:10:16 crc kubenswrapper[4889]: I0219 00:10:16.982459 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d"}
Feb 19 00:10:16 crc kubenswrapper[4889]: I0219 00:10:16.983259 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:16 crc kubenswrapper[4889]: I0219 00:10:16.983474 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:16 crc kubenswrapper[4889]: I0219 00:10:16.983794 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:16 crc kubenswrapper[4889]: I0219 00:10:16.985475 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.073718 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vgwtt"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.073803 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vgwtt"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.123909 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vgwtt"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.124925 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.125548 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.125759 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.126027 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.261823 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.263428 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.263679 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.263931 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.264153 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.408593 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kubelet-dir\") pod \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") "
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.409217 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kube-api-access\") pod \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") "
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.409469 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-var-lock\") pod \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\" (UID: \"9fdf3c01-72ec-4b9c-bbe8-378746a212de\") "
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.409964 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-var-lock" (OuterVolumeSpecName: "var-lock") pod "9fdf3c01-72ec-4b9c-bbe8-378746a212de" (UID: "9fdf3c01-72ec-4b9c-bbe8-378746a212de"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.410018 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9fdf3c01-72ec-4b9c-bbe8-378746a212de" (UID: "9fdf3c01-72ec-4b9c-bbe8-378746a212de"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.418155 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9fdf3c01-72ec-4b9c-bbe8-378746a212de" (UID: "9fdf3c01-72ec-4b9c-bbe8-378746a212de"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.498726 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ghpfj"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.498854 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ghpfj"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.512500 4889 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-var-lock\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.512552 4889 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.512567 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9fdf3c01-72ec-4b9c-bbe8-378746a212de-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.562851 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ghpfj"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.563406 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.563816 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.564152 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.564424 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.564720 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.994124 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9fdf3c01-72ec-4b9c-bbe8-378746a212de","Type":"ContainerDied","Data":"b4d35a3e89a80fc725b4ffe752a502d43bb8bde80a11126485746a3d2e53c1e2"}
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.994180 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4d35a3e89a80fc725b4ffe752a502d43bb8bde80a11126485746a3d2e53c1e2"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.994188 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 19 00:10:17 crc kubenswrapper[4889]: I0219 00:10:17.999586 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.000370 4889 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7" exitCode=0
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.013049 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.013733 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.015182 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.015818 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.016131 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.048261 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vgwtt"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.049154 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.049804 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.050435 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.052300 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.052476 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.055909 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ghpfj"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.056288 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.056493 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.056732 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.056930 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.057359 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.083694 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j6glr"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.083755 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j6glr"
Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.179176
4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.180315 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.181141 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.181520 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.182150 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.182467 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:18 crc 
kubenswrapper[4889]: I0219 00:10:18.182781 4889 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.183087 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.324636 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.324839 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.324883 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.325062 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: 
"cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.325078 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.325192 4889 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.325210 4889 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.326173 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.426459 4889 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.521300 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.521593 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:10:18 crc kubenswrapper[4889]: E0219 00:10:18.565103 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="3.2s" Feb 19 00:10:18 crc kubenswrapper[4889]: I0219 00:10:18.733569 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.392520 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j6glr" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="registry-server" probeResult="failure" output=< Feb 19 00:10:19 crc kubenswrapper[4889]: timeout: failed to connect service ":50051" within 1s Feb 19 00:10:19 crc kubenswrapper[4889]: > Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.403504 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.405467 4889 scope.go:117] 
"RemoveContainer" containerID="78ccc73f6ae9844771858dd29b971c90a45b1a4d2fc1b2b8bade06abc200bba5" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.405794 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.407140 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.407918 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.408297 4889 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.409496 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.410675 4889 status_manager.go:851] "Failed to get status for pod" 
podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.410983 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.411876 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.412394 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.412666 4889 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.412859 4889 status_manager.go:851] "Failed to get status for pod" 
podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.414622 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.415121 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.430424 4889 scope.go:117] "RemoveContainer" containerID="b2a451eb83fda5dbfd55b1f3825282bd61c2f00320aea3d075c7549ee3abf4a7" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.450329 4889 scope.go:117] "RemoveContainer" containerID="eff68702644ef3c606fea237295febc6a6746cffe1ed10eef0f1cd5f29546f91" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.470292 4889 scope.go:117] "RemoveContainer" containerID="024dc52607decc49a8557bd7877d4ab6e682678a2a55d2935950bccb4cd082cc" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.496887 4889 scope.go:117] "RemoveContainer" containerID="a90a71fa5e1f52059ed64f5f0822637f5fd84a450f3fef143fa69f17c0dbfdd7" Feb 19 00:10:19 crc kubenswrapper[4889]: I0219 00:10:19.516964 4889 scope.go:117] "RemoveContainer" containerID="04fe1a40a4776298b903ab2a20e2ac353286a9e6fe0ed0c54daadb880fef2b34" Feb 19 00:10:19 
crc kubenswrapper[4889]: I0219 00:10:19.562403 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x2ncv" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="registry-server" probeResult="failure" output=< Feb 19 00:10:19 crc kubenswrapper[4889]: timeout: failed to connect service ":50051" within 1s Feb 19 00:10:19 crc kubenswrapper[4889]: > Feb 19 00:10:21 crc kubenswrapper[4889]: E0219 00:10:21.725699 4889 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.148:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" volumeName="registry-storage" Feb 19 00:10:21 crc kubenswrapper[4889]: E0219 00:10:21.766942 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="6.4s" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.728138 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.728892 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.729102 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.729337 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.729589 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.967660 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jhjcw" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.969667 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.970266 
4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.970622 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.970867 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.971057 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:24 crc kubenswrapper[4889]: I0219 00:10:24.971302 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:25 crc kubenswrapper[4889]: I0219 00:10:25.008127 4889 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jhjcw" Feb 19 00:10:25 crc kubenswrapper[4889]: I0219 00:10:25.008934 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:25 crc kubenswrapper[4889]: I0219 00:10:25.009525 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:25 crc kubenswrapper[4889]: I0219 00:10:25.009918 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:25 crc kubenswrapper[4889]: I0219 00:10:25.010286 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:25 crc kubenswrapper[4889]: I0219 00:10:25.011189 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:25 crc kubenswrapper[4889]: I0219 00:10:25.011570 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:26 crc kubenswrapper[4889]: E0219 00:10:26.064068 4889 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.148:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18957d50d6a2b712 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:10:14.59200181 +0000 UTC m=+240.556666811,LastTimestamp:2026-02-19 00:10:14.59200181 +0000 UTC m=+240.556666811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.214613 4889 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: 
Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.215173 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.468866 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.468921 4889 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1" exitCode=1 Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.468963 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1"} Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.469703 4889 scope.go:117] "RemoveContainer" containerID="53a3a970a08c04659b69793e9cab5fdd138bfa38ccdf14b348cbfec1b63d78c1" Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.471106 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused" Feb 19 00:10:27 crc 
kubenswrapper[4889]: I0219 00:10:27.471668 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.472495 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.472804 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.473060 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.473292 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:27 crc kubenswrapper[4889]: I0219 00:10:27.473694 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.126692 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j6glr"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.127899 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.128341 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.128791 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.129299 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.129528 4889 status_manager.go:851] "Failed to get status for pod" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" pod="openshift-marketplace/redhat-operators-j6glr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-j6glr\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.129771 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.130009 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.130320 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.164309 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j6glr"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.165124 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.165646 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.166075 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.166370 4889 status_manager.go:851] "Failed to get status for pod" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" pod="openshift-marketplace/redhat-operators-j6glr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-j6glr\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.166642 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.167002 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.167280 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.167681 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: E0219 00:10:28.167682 4889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.148:6443: connect: connection refused" interval="7s"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.478381 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.478502 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a787e4f996491d44c8ad878e62bb5a40046f2521e59f5a673fc8d0c2a7632a60"}
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.480592 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.481334 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.482041 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.482493 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.482833 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.483121 4889 status_manager.go:851] "Failed to get status for pod" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" pod="openshift-marketplace/redhat-operators-j6glr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-j6glr\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.483438 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.483753 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.560947 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x2ncv"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.561961 4889 status_manager.go:851] "Failed to get status for pod" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" pod="openshift-marketplace/redhat-operators-x2ncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-x2ncv\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.562287 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.562651 4889 status_manager.go:851] "Failed to get status for pod" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" pod="openshift-marketplace/redhat-operators-j6glr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-j6glr\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.563139 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.563678 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.564367 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.564710 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.565016 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.565452 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.600600 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x2ncv"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.601266 4889 status_manager.go:851] "Failed to get status for pod" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" pod="openshift-marketplace/redhat-operators-x2ncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-x2ncv\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.601965 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.602609 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.602977 4889 status_manager.go:851] "Failed to get status for pod" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" pod="openshift-marketplace/redhat-operators-j6glr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-j6glr\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.603539 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.603866 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.604137 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.604370 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.604640 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.724932 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.725992 4889 status_manager.go:851] "Failed to get status for pod" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" pod="openshift-marketplace/redhat-operators-x2ncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-x2ncv\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.726286 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.726549 4889 status_manager.go:851] "Failed to get status for pod" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" pod="openshift-marketplace/redhat-operators-j6glr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-j6glr\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.726786 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.726966 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.727144 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.727342 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.727539 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.729469 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.743403 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.743454 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:28 crc kubenswrapper[4889]: E0219 00:10:28.744165 4889 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:28 crc kubenswrapper[4889]: I0219 00:10:28.744897 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:28 crc kubenswrapper[4889]: W0219 00:10:28.763799 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-22aa40face8fc4d6834e68bc63f3c51843e84a1738cd707b0c20393569127b7c WatchSource:0}: Error finding container 22aa40face8fc4d6834e68bc63f3c51843e84a1738cd707b0c20393569127b7c: Status 404 returned error can't find the container with id 22aa40face8fc4d6834e68bc63f3c51843e84a1738cd707b0c20393569127b7c
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.486022 4889 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="d3711fce6029589a4cf4de0c68d8f1571e4ae2e8ee208e5d8078dc76754aa218" exitCode=0
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.486100 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"d3711fce6029589a4cf4de0c68d8f1571e4ae2e8ee208e5d8078dc76754aa218"}
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.486528 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"22aa40face8fc4d6834e68bc63f3c51843e84a1738cd707b0c20393569127b7c"}
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.486928 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.486946 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.487552 4889 status_manager.go:851] "Failed to get status for pod" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" pod="openshift-marketplace/redhat-marketplace-ghpfj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-ghpfj\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:29 crc kubenswrapper[4889]: E0219 00:10:29.487759 4889 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.148:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.487849 4889 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.488109 4889 status_manager.go:851] "Failed to get status for pod" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" pod="openshift-marketplace/redhat-marketplace-vgwtt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgwtt\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.488403 4889 status_manager.go:851] "Failed to get status for pod" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" pod="openshift-marketplace/redhat-operators-x2ncv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-x2ncv\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.488658 4889 status_manager.go:851] "Failed to get status for pod" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" pod="openshift-marketplace/community-operators-jhjcw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-jhjcw\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.489477 4889 status_manager.go:851] "Failed to get status for pod" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" pod="openshift-marketplace/redhat-operators-j6glr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-j6glr\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.490132 4889 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.490782 4889 status_manager.go:851] "Failed to get status for pod" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:29 crc kubenswrapper[4889]: I0219 00:10:29.491070 4889 status_manager.go:851] "Failed to get status for pod" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" pod="openshift-marketplace/certified-operators-s7t7k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-s7t7k\": dial tcp 38.102.83.148:6443: connect: connection refused"
Feb 19 00:10:30 crc kubenswrapper[4889]: I0219 00:10:30.500141 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"36dd7559443a5b9700a748f388ea346a7c9019fb631e90d99a76e633f4a27888"}
Feb 19 00:10:30 crc kubenswrapper[4889]: I0219 00:10:30.500689 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fab51b17956fa08f74658b9cbc57faf0c2e6f0b612303333cb05180e25076712"}
Feb 19 00:10:31 crc kubenswrapper[4889]: I0219 00:10:31.207803 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:10:31 crc kubenswrapper[4889]: I0219 00:10:31.269969 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:10:31 crc kubenswrapper[4889]: I0219 00:10:31.280466 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:10:31 crc kubenswrapper[4889]: I0219 00:10:31.510913 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:31 crc kubenswrapper[4889]: I0219 00:10:31.510951 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:31 crc kubenswrapper[4889]: I0219 00:10:31.511102 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d05897f8f9b0d5e3f7e760058b0e5fc6e5faad4d212bd3d2e034f24868ec3bb5"}
Feb 19 00:10:31 crc kubenswrapper[4889]: I0219 00:10:31.511131 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eea470ee10a29ed9ab278aa820dabccb19707a397f377854ee2473c130081fe0"}
Feb 19 00:10:31 crc kubenswrapper[4889]: I0219 00:10:31.511141 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b98e6036dcffe1499e1c16ea9bdd9cfca94ea9aa479342eb2bfdb81a8b0803f"}
Feb 19 00:10:31 crc kubenswrapper[4889]: I0219 00:10:31.511185 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:33 crc kubenswrapper[4889]: I0219 00:10:33.745312 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:33 crc kubenswrapper[4889]: I0219 00:10:33.745882 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:33 crc kubenswrapper[4889]: I0219 00:10:33.750664 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:36 crc kubenswrapper[4889]: I0219 00:10:36.532832 4889 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:37 crc kubenswrapper[4889]: I0219 00:10:37.556862 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:37 crc kubenswrapper[4889]: I0219 00:10:37.556910 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:37 crc kubenswrapper[4889]: I0219 00:10:37.563119 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:37 crc kubenswrapper[4889]: I0219 00:10:37.566928 4889 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5d913b78-4247-4d78-af75-fd73e2a02c9d"
Feb 19 00:10:38 crc kubenswrapper[4889]: I0219 00:10:38.566783 4889 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:38 crc kubenswrapper[4889]: I0219 00:10:38.566827 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bbefdc30-a268-4186-bf13-ea846011bd2c"
Feb 19 00:10:41 crc kubenswrapper[4889]: I0219 00:10:41.213028 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:10:44 crc kubenswrapper[4889]: I0219 00:10:44.768299 4889 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5d913b78-4247-4d78-af75-fd73e2a02c9d"
Feb 19 00:10:46 crc kubenswrapper[4889]: I0219 00:10:46.578008 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 19 00:10:46 crc kubenswrapper[4889]: I0219 00:10:46.621401 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 19 00:10:46 crc kubenswrapper[4889]: I0219 00:10:46.881200 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 19 00:10:47 crc kubenswrapper[4889]: I0219 00:10:47.256803 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 19 00:10:47 crc kubenswrapper[4889]: I0219 00:10:47.410165 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 00:10:47 crc kubenswrapper[4889]: I0219 00:10:47.472033 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 19 00:10:47 crc kubenswrapper[4889]: I0219 00:10:47.690731 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 19 00:10:47 crc kubenswrapper[4889]: I0219 00:10:47.710047 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 00:10:47 crc kubenswrapper[4889]: I0219 00:10:47.743493 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 19 00:10:47 crc kubenswrapper[4889]: I0219 00:10:47.793137 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 19 00:10:47 crc kubenswrapper[4889]: I0219 00:10:47.870926 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.171955 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.262917 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.480503 4889 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.482850 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=34.482832545 podStartE2EDuration="34.482832545s" podCreationTimestamp="2026-02-19 00:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:36.464657045 +0000 UTC m=+262.429322036" watchObservedRunningTime="2026-02-19 00:10:48.482832545 +0000 UTC m=+274.447497536"
Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.485126 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.485168 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.490547 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.501460 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.501431899 podStartE2EDuration="12.501431899s" podCreationTimestamp="2026-02-19 00:10:36 +0000 UTC" firstStartedPulling="0001-01-01
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:10:48.499999344 +0000 UTC m=+274.464664335" watchObservedRunningTime="2026-02-19 00:10:48.501431899 +0000 UTC m=+274.466096880" Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.515321 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.549517 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.632788 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.787557 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.825336 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.853718 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.882606 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 00:10:48 crc kubenswrapper[4889]: I0219 00:10:48.974921 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.068936 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 
00:10:49.129289 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.200034 4889 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.203129 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.213468 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.213608 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.231622 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.381934 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.440424 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.492129 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.640825 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.691070 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 00:10:49 crc kubenswrapper[4889]: 
I0219 00:10:49.722754 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.777484 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.819257 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.900898 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.911484 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.914332 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 00:10:49 crc kubenswrapper[4889]: I0219 00:10:49.931325 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.022270 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.109833 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.127734 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.223040 4889 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.367294 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.421127 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.434331 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.476773 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.490686 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.633153 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.768570 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.797482 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.834895 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.908659 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 00:10:50 crc 
kubenswrapper[4889]: I0219 00:10:50.914242 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 00:10:50 crc kubenswrapper[4889]: I0219 00:10:50.945972 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.145080 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.166882 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.167157 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.180689 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.198740 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.261059 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.366861 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.419386 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.427447 4889 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.458495 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.521863 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.580807 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.677402 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.730663 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.745744 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.756182 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.774981 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.782957 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 00:10:51 crc kubenswrapper[4889]: I0219 00:10:51.821756 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 00:10:51 crc 
kubenswrapper[4889]: I0219 00:10:51.824069 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.080587 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.080950 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.120649 4889 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.144645 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.224637 4889 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.230318 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.250731 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.274866 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.314657 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.358730 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 00:10:52 
crc kubenswrapper[4889]: I0219 00:10:52.459951 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.511235 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.536440 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.568413 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.627407 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.692427 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.708984 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.804918 4889 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.969339 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 00:10:52 crc kubenswrapper[4889]: I0219 00:10:52.987761 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.015765 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 00:10:53 crc 
kubenswrapper[4889]: I0219 00:10:53.060767 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.071171 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.096546 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.141660 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.298601 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.398936 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.420993 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.454847 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.538482 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.547913 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.576108 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 00:10:53 crc 
kubenswrapper[4889]: I0219 00:10:53.590348 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.759342 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.855500 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 00:10:53 crc kubenswrapper[4889]: I0219 00:10:53.925006 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.065059 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.153322 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.274588 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.328061 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.330620 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.347495 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.352210 
4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.386816 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.406399 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.413090 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.424399 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.468432 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.485860 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.509929 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.542418 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.644766 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.721540 4889 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"metrics-tls" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.762496 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.778976 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.826375 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.844013 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 00:10:54 crc kubenswrapper[4889]: I0219 00:10:54.918788 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.066134 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.083197 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.114711 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.129902 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.226109 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 
00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.326812 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.370472 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.443909 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.511991 4889 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.520346 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.551584 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.645839 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.707282 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.716593 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.771878 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.874739 4889 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets"
Feb 19 00:10:55 crc kubenswrapper[4889]: I0219 00:10:55.874739 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.159092 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.227181 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.267587 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.309407 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.357463 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.373756 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.461957 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.476742 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.707001 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.733851 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.796032 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.826899 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.936856 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 19 00:10:56 crc kubenswrapper[4889]: I0219 00:10:56.960411 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.005959 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.069736 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.071518 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.089258 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.109727 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.126920 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.200649 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.266789 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.351674 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.352450 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.382800 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.431783 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.438385 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.479084 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.486410 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.542331 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.572228 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.665551 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.853461 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 19 00:10:57 crc kubenswrapper[4889]: I0219 00:10:57.986060 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.096377 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.171792 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.237296 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.368932 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.475467 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.503055 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.522595 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.576009 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.623750 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.693555 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.723398 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.739729 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.751004 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.870898 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.994143 4889 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 00:10:58 crc kubenswrapper[4889]: I0219 00:10:58.994394 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d" gracePeriod=5
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.027773 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.070285 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.103021 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.121106 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.181429 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.273159 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.284066 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.296076 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.310373 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.320792 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.351629 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.410788 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.510849 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.614858 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.654498 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.685608 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.727056 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.729305 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.732588 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.822195 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.861943 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.888857 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 19 00:10:59 crc kubenswrapper[4889]: I0219 00:10:59.926560 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.094129 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.151319 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.222489 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.247150 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.295793 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.351760 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.377013 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.401767 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.428010 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.497179 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.498994 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.677689 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.747298 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.757206 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.928670 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 19 00:11:00 crc kubenswrapper[4889]: I0219 00:11:00.987801 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 19 00:11:01 crc kubenswrapper[4889]: I0219 00:11:01.005936 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 19 00:11:01 crc kubenswrapper[4889]: I0219 00:11:01.021363 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 19 00:11:01 crc kubenswrapper[4889]: I0219 00:11:01.046380 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 19 00:11:01 crc kubenswrapper[4889]: I0219 00:11:01.183855 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 19 00:11:01 crc kubenswrapper[4889]: I0219 00:11:01.248623 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 00:11:01 crc kubenswrapper[4889]: I0219 00:11:01.895043 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 00:11:02 crc kubenswrapper[4889]: I0219 00:11:02.098956 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 19 00:11:02 crc kubenswrapper[4889]: I0219 00:11:02.662566 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 19 00:11:03 crc kubenswrapper[4889]: I0219 00:11:03.375950 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.671388 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.671748 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.723493 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.723543 4889 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d" exitCode=137
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.723595 4889 scope.go:117] "RemoveContainer" containerID="a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.723608 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.734308 4889 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.745549 4889 scope.go:117] "RemoveContainer" containerID="a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d"
Feb 19 00:11:04 crc kubenswrapper[4889]: E0219 00:11:04.746101 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d\": container with ID starting with a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d not found: ID does not exist" containerID="a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.746158 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d"} err="failed to get container status \"a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d\": rpc error: code = NotFound desc = could not find container \"a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d\": container with ID starting with a291514d9a660059744f09e581ab3a2f7b8c3ec55201d45fa4f318bda5659e7d not found: ID does not exist"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.773843 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.773888 4889 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c1e77912-f084-4e7d-b33f-c3a273bc67e2"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.773913 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.773925 4889 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c1e77912-f084-4e7d-b33f-c3a273bc67e2"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.823659 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.823804 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.823916 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.823950 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.823994 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.824011 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.824125 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.824155 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.824198 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.824731 4889 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.824764 4889 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.824778 4889 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.824793 4889 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.837434 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.877884 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 19 00:11:04 crc kubenswrapper[4889]: I0219 00:11:04.926444 4889 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:06 crc kubenswrapper[4889]: I0219 00:11:06.733617 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 19 00:11:14 crc kubenswrapper[4889]: I0219 00:11:14.428800 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 19 00:11:14 crc kubenswrapper[4889]: I0219 00:11:14.521909 4889 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.008715 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6jh8"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.010685 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b6jh8" podUID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerName="registry-server" containerID="cri-o://cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66" gracePeriod=30
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.018976 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s7t7k"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.020915 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s7t7k" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerName="registry-server" containerID="cri-o://74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f" gracePeriod=30
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.029659 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhjcw"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.029980 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jhjcw" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerName="registry-server" containerID="cri-o://bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb" gracePeriod=30
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.047848 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfp24"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.048137 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" podUID="f7a4c945-2b4c-4b30-ad06-9158ce04018e" containerName="marketplace-operator" containerID="cri-o://1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e" gracePeriod=30
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.068880 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpfj"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.069242 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ghpfj" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerName="registry-server" containerID="cri-o://7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a" gracePeriod=30
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.079038 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgwtt"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.079326 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vgwtt" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerName="registry-server" containerID="cri-o://3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e" gracePeriod=30
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.096136 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w7fwf"]
Feb 19 00:11:18 crc kubenswrapper[4889]: E0219 00:11:18.096660 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" containerName="installer"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.096681 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" containerName="installer"
Feb 19 00:11:18 crc kubenswrapper[4889]: E0219 00:11:18.096704 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.096713 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.096891 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fdf3c01-72ec-4b9c-bbe8-378746a212de" containerName="installer"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.096907 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.097676 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.103414 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j6glr"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.103954 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j6glr" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="registry-server" containerID="cri-o://4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4" gracePeriod=30
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.108881 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2ncv"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.109333 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x2ncv" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="registry-server" containerID="cri-o://fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d" gracePeriod=30
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.115742 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w7fwf"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.158896 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-j6glr" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="registry-server" probeResult="failure" output=""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.168424 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-j6glr" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="registry-server" probeResult="failure" output=""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.210834 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7be42c5f-0df1-4ab4-92d8-e47ca8047150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w7fwf\" (UID: \"7be42c5f-0df1-4ab4-92d8-e47ca8047150\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.210951 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcgdr\" (UniqueName: \"kubernetes.io/projected/7be42c5f-0df1-4ab4-92d8-e47ca8047150-kube-api-access-bcgdr\") pod \"marketplace-operator-79b997595-w7fwf\" (UID: \"7be42c5f-0df1-4ab4-92d8-e47ca8047150\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.211025 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7be42c5f-0df1-4ab4-92d8-e47ca8047150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w7fwf\" (UID: \"7be42c5f-0df1-4ab4-92d8-e47ca8047150\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.312422 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7be42c5f-0df1-4ab4-92d8-e47ca8047150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w7fwf\" (UID: \"7be42c5f-0df1-4ab4-92d8-e47ca8047150\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.312522 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7be42c5f-0df1-4ab4-92d8-e47ca8047150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w7fwf\" (UID: \"7be42c5f-0df1-4ab4-92d8-e47ca8047150\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.312660 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcgdr\" (UniqueName: \"kubernetes.io/projected/7be42c5f-0df1-4ab4-92d8-e47ca8047150-kube-api-access-bcgdr\") pod \"marketplace-operator-79b997595-w7fwf\" (UID: \"7be42c5f-0df1-4ab4-92d8-e47ca8047150\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.323081 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7be42c5f-0df1-4ab4-92d8-e47ca8047150-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w7fwf\" (UID: \"7be42c5f-0df1-4ab4-92d8-e47ca8047150\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.349700 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7be42c5f-0df1-4ab4-92d8-e47ca8047150-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w7fwf\" (UID: \"7be42c5f-0df1-4ab4-92d8-e47ca8047150\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.351193 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcgdr\" (UniqueName: \"kubernetes.io/projected/7be42c5f-0df1-4ab4-92d8-e47ca8047150-kube-api-access-bcgdr\") pod \"marketplace-operator-79b997595-w7fwf\" (UID: \"7be42c5f-0df1-4ab4-92d8-e47ca8047150\") " pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf"
Feb 19 00:11:18 crc kubenswrapper[4889]: E0219 00:11:18.521901 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d is running failed: container process not found" containerID="fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 00:11:18 crc kubenswrapper[4889]: E0219 00:11:18.522967 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d is running failed: container process not found" containerID="fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 00:11:18 crc kubenswrapper[4889]: E0219 00:11:18.523305 4889 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d is running failed: container process not found" containerID="fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d" cmd=["grpc_health_probe","-addr=:50051"]
Feb 19 00:11:18 crc kubenswrapper[4889]: E0219 00:11:18.523344 4889 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-x2ncv" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="registry-server"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.581935 4889
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.645440 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.658436 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.669846 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhjcw" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.671580 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.674868 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.699680 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.723861 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2ncv" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.735556 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.736136 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.824795 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-utilities\") pod \"1eb8ee4a-7192-4edc-a132-248289edb91f\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.824886 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-catalog-content\") pod \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.824917 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-catalog-content\") pod \"39c55028-d3da-4bad-92f6-34e5250a9276\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.824950 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87j65\" (UniqueName: \"kubernetes.io/projected/1eb8ee4a-7192-4edc-a132-248289edb91f-kube-api-access-87j65\") pod \"1eb8ee4a-7192-4edc-a132-248289edb91f\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.824995 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcbzq\" (UniqueName: \"kubernetes.io/projected/f7a4c945-2b4c-4b30-ad06-9158ce04018e-kube-api-access-hcbzq\") pod \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825020 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-catalog-content\") pod \"5a2ae970-7f30-494c-ae36-bf25f120c59a\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825048 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d959\" (UniqueName: \"kubernetes.io/projected/4841b44f-786c-4f3d-af26-c6ae5b08eee4-kube-api-access-4d959\") pod \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825074 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf9r6\" (UniqueName: \"kubernetes.io/projected/f09a3256-3dd8-4e60-bee8-379678cf15f7-kube-api-access-vf9r6\") pod \"f09a3256-3dd8-4e60-bee8-379678cf15f7\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825104 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt756\" (UniqueName: \"kubernetes.io/projected/5a2ae970-7f30-494c-ae36-bf25f120c59a-kube-api-access-xt756\") pod \"5a2ae970-7f30-494c-ae36-bf25f120c59a\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825135 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-catalog-content\") pod \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825166 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcvt9\" (UniqueName: \"kubernetes.io/projected/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-kube-api-access-rcvt9\") pod 
\"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825192 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-catalog-content\") pod \"e0bf937f-3956-40f1-9e52-d2000c46291c\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825233 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzlnw\" (UniqueName: \"kubernetes.io/projected/39c55028-d3da-4bad-92f6-34e5250a9276-kube-api-access-bzlnw\") pod \"39c55028-d3da-4bad-92f6-34e5250a9276\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825280 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flq2w\" (UniqueName: \"kubernetes.io/projected/e0bf937f-3956-40f1-9e52-d2000c46291c-kube-api-access-flq2w\") pod \"e0bf937f-3956-40f1-9e52-d2000c46291c\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825306 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-utilities\") pod \"5a2ae970-7f30-494c-ae36-bf25f120c59a\" (UID: \"5a2ae970-7f30-494c-ae36-bf25f120c59a\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825342 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-utilities\") pod \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\" (UID: \"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825365 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-utilities\") pod \"f09a3256-3dd8-4e60-bee8-379678cf15f7\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825393 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-utilities\") pod \"39c55028-d3da-4bad-92f6-34e5250a9276\" (UID: \"39c55028-d3da-4bad-92f6-34e5250a9276\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825427 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-catalog-content\") pod \"1eb8ee4a-7192-4edc-a132-248289edb91f\" (UID: \"1eb8ee4a-7192-4edc-a132-248289edb91f\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825452 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-catalog-content\") pod \"f09a3256-3dd8-4e60-bee8-379678cf15f7\" (UID: \"f09a3256-3dd8-4e60-bee8-379678cf15f7\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825484 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-utilities\") pod \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\" (UID: \"4841b44f-786c-4f3d-af26-c6ae5b08eee4\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825516 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-utilities\") pod \"e0bf937f-3956-40f1-9e52-d2000c46291c\" (UID: \"e0bf937f-3956-40f1-9e52-d2000c46291c\") " Feb 19 00:11:18 crc 
kubenswrapper[4889]: I0219 00:11:18.825549 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-operator-metrics\") pod \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.825577 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-trusted-ca\") pod \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\" (UID: \"f7a4c945-2b4c-4b30-ad06-9158ce04018e\") " Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.832760 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-utilities" (OuterVolumeSpecName: "utilities") pod "1eb8ee4a-7192-4edc-a132-248289edb91f" (UID: "1eb8ee4a-7192-4edc-a132-248289edb91f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.834654 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.840034 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f09a3256-3dd8-4e60-bee8-379678cf15f7-kube-api-access-vf9r6" (OuterVolumeSpecName: "kube-api-access-vf9r6") pod "f09a3256-3dd8-4e60-bee8-379678cf15f7" (UID: "f09a3256-3dd8-4e60-bee8-379678cf15f7"). InnerVolumeSpecName "kube-api-access-vf9r6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.842012 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-utilities" (OuterVolumeSpecName: "utilities") pod "4841b44f-786c-4f3d-af26-c6ae5b08eee4" (UID: "4841b44f-786c-4f3d-af26-c6ae5b08eee4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.843098 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0bf937f-3956-40f1-9e52-d2000c46291c-kube-api-access-flq2w" (OuterVolumeSpecName: "kube-api-access-flq2w") pod "e0bf937f-3956-40f1-9e52-d2000c46291c" (UID: "e0bf937f-3956-40f1-9e52-d2000c46291c"). InnerVolumeSpecName "kube-api-access-flq2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.843158 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4841b44f-786c-4f3d-af26-c6ae5b08eee4-kube-api-access-4d959" (OuterVolumeSpecName: "kube-api-access-4d959") pod "4841b44f-786c-4f3d-af26-c6ae5b08eee4" (UID: "4841b44f-786c-4f3d-af26-c6ae5b08eee4"). InnerVolumeSpecName "kube-api-access-4d959". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.844685 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-utilities" (OuterVolumeSpecName: "utilities") pod "5a2ae970-7f30-494c-ae36-bf25f120c59a" (UID: "5a2ae970-7f30-494c-ae36-bf25f120c59a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.845920 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-utilities" (OuterVolumeSpecName: "utilities") pod "f09a3256-3dd8-4e60-bee8-379678cf15f7" (UID: "f09a3256-3dd8-4e60-bee8-379678cf15f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.847114 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "f7a4c945-2b4c-4b30-ad06-9158ce04018e" (UID: "f7a4c945-2b4c-4b30-ad06-9158ce04018e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.847961 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-utilities" (OuterVolumeSpecName: "utilities") pod "0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" (UID: "0533f084-8fff-43b9-b7d6-a4fc0d6b85c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.850022 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c55028-d3da-4bad-92f6-34e5250a9276-kube-api-access-bzlnw" (OuterVolumeSpecName: "kube-api-access-bzlnw") pod "39c55028-d3da-4bad-92f6-34e5250a9276" (UID: "39c55028-d3da-4bad-92f6-34e5250a9276"). InnerVolumeSpecName "kube-api-access-bzlnw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.850730 4889 generic.go:334] "Generic (PLEG): container finished" podID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerID="bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb" exitCode=0 Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.850828 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhjcw" event={"ID":"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5","Type":"ContainerDied","Data":"bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.850869 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jhjcw" event={"ID":"0533f084-8fff-43b9-b7d6-a4fc0d6b85c5","Type":"ContainerDied","Data":"a93aa30a790eb336d47049b164a5d12a1d467eb80c9a3fa0130d6ddace141151"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.850894 4889 scope.go:117] "RemoveContainer" containerID="bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.851006 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jhjcw" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.851868 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2ae970-7f30-494c-ae36-bf25f120c59a-kube-api-access-xt756" (OuterVolumeSpecName: "kube-api-access-xt756") pod "5a2ae970-7f30-494c-ae36-bf25f120c59a" (UID: "5a2ae970-7f30-494c-ae36-bf25f120c59a"). InnerVolumeSpecName "kube-api-access-xt756". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.855489 4889 generic.go:334] "Generic (PLEG): container finished" podID="39c55028-d3da-4bad-92f6-34e5250a9276" containerID="4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4" exitCode=0 Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.855561 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6glr" event={"ID":"39c55028-d3da-4bad-92f6-34e5250a9276","Type":"ContainerDied","Data":"4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.855590 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6glr" event={"ID":"39c55028-d3da-4bad-92f6-34e5250a9276","Type":"ContainerDied","Data":"b5f5e77d3b9e7af5328489865c2479f1e029d11ec9ad30d9fc35e9da1f0b701d"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.855671 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6glr" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.858344 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-utilities" (OuterVolumeSpecName: "utilities") pod "39c55028-d3da-4bad-92f6-34e5250a9276" (UID: "39c55028-d3da-4bad-92f6-34e5250a9276"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.859888 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-utilities" (OuterVolumeSpecName: "utilities") pod "e0bf937f-3956-40f1-9e52-d2000c46291c" (UID: "e0bf937f-3956-40f1-9e52-d2000c46291c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.862825 4889 generic.go:334] "Generic (PLEG): container finished" podID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerID="7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a" exitCode=0 Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.862942 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpfj" event={"ID":"4841b44f-786c-4f3d-af26-c6ae5b08eee4","Type":"ContainerDied","Data":"7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.862977 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ghpfj" event={"ID":"4841b44f-786c-4f3d-af26-c6ae5b08eee4","Type":"ContainerDied","Data":"b8e3e528b216a60b1e90c1fe9f8b18319987556764ac825f4edef64ea518f57d"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.862987 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ghpfj" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.880932 4889 generic.go:334] "Generic (PLEG): container finished" podID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerID="3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e" exitCode=0 Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.881016 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgwtt" event={"ID":"1eb8ee4a-7192-4edc-a132-248289edb91f","Type":"ContainerDied","Data":"3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.881066 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgwtt" event={"ID":"1eb8ee4a-7192-4edc-a132-248289edb91f","Type":"ContainerDied","Data":"1b2cd812b1a88f5ce22470006d77674454a03a3bf349e3ed74b0528c593c345c"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.881176 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgwtt" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.889170 4889 scope.go:117] "RemoveContainer" containerID="42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.892885 4889 generic.go:334] "Generic (PLEG): container finished" podID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerID="cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66" exitCode=0 Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.892967 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6jh8" event={"ID":"f09a3256-3dd8-4e60-bee8-379678cf15f7","Type":"ContainerDied","Data":"cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.893010 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6jh8" event={"ID":"f09a3256-3dd8-4e60-bee8-379678cf15f7","Type":"ContainerDied","Data":"30e6d7fca88c4c4a928e710bb2544a605a5e898a8d9f79dfe63ba5cb41b25032"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.892941 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6jh8" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.893800 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-kube-api-access-rcvt9" (OuterVolumeSpecName: "kube-api-access-rcvt9") pod "0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" (UID: "0533f084-8fff-43b9-b7d6-a4fc0d6b85c5"). InnerVolumeSpecName "kube-api-access-rcvt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.895342 4889 generic.go:334] "Generic (PLEG): container finished" podID="f7a4c945-2b4c-4b30-ad06-9158ce04018e" containerID="1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e" exitCode=0 Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.895375 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" event={"ID":"f7a4c945-2b4c-4b30-ad06-9158ce04018e","Type":"ContainerDied","Data":"1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.895405 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" event={"ID":"f7a4c945-2b4c-4b30-ad06-9158ce04018e","Type":"ContainerDied","Data":"8f76b004c90e6aea1851efb009a6f0aae167047d7f2d26d37f09702db4af622b"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.896113 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sfp24" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.898476 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb8ee4a-7192-4edc-a132-248289edb91f-kube-api-access-87j65" (OuterVolumeSpecName: "kube-api-access-87j65") pod "1eb8ee4a-7192-4edc-a132-248289edb91f" (UID: "1eb8ee4a-7192-4edc-a132-248289edb91f"). InnerVolumeSpecName "kube-api-access-87j65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.898965 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a4c945-2b4c-4b30-ad06-9158ce04018e-kube-api-access-hcbzq" (OuterVolumeSpecName: "kube-api-access-hcbzq") pod "f7a4c945-2b4c-4b30-ad06-9158ce04018e" (UID: "f7a4c945-2b4c-4b30-ad06-9158ce04018e"). InnerVolumeSpecName "kube-api-access-hcbzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.899242 4889 generic.go:334] "Generic (PLEG): container finished" podID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerID="74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f" exitCode=0 Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.899330 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7t7k" event={"ID":"e0bf937f-3956-40f1-9e52-d2000c46291c","Type":"ContainerDied","Data":"74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.899373 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7t7k" event={"ID":"e0bf937f-3956-40f1-9e52-d2000c46291c","Type":"ContainerDied","Data":"e11e3a33f69d61f8bfef3bc9350da5853008b0aec7d386a2e4f0b96ce67985c8"} Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.899520 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7t7k" Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.900659 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4841b44f-786c-4f3d-af26-c6ae5b08eee4" (UID: "4841b44f-786c-4f3d-af26-c6ae5b08eee4"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.931110 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1eb8ee4a-7192-4edc-a132-248289edb91f" (UID: "1eb8ee4a-7192-4edc-a132-248289edb91f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937185 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzlnw\" (UniqueName: \"kubernetes.io/projected/39c55028-d3da-4bad-92f6-34e5250a9276-kube-api-access-bzlnw\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937276 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flq2w\" (UniqueName: \"kubernetes.io/projected/e0bf937f-3956-40f1-9e52-d2000c46291c-kube-api-access-flq2w\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937290 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937304 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937377 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937587 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937599 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eb8ee4a-7192-4edc-a132-248289edb91f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937627 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937637 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937658 4889 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937668 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87j65\" (UniqueName: \"kubernetes.io/projected/1eb8ee4a-7192-4edc-a132-248289edb91f-kube-api-access-87j65\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937719 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcbzq\" (UniqueName: \"kubernetes.io/projected/f7a4c945-2b4c-4b30-ad06-9158ce04018e-kube-api-access-hcbzq\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937731 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d959\" (UniqueName: \"kubernetes.io/projected/4841b44f-786c-4f3d-af26-c6ae5b08eee4-kube-api-access-4d959\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937783 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf9r6\" (UniqueName: \"kubernetes.io/projected/f09a3256-3dd8-4e60-bee8-379678cf15f7-kube-api-access-vf9r6\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937794 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt756\" (UniqueName: \"kubernetes.io/projected/5a2ae970-7f30-494c-ae36-bf25f120c59a-kube-api-access-xt756\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937807 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4841b44f-786c-4f3d-af26-c6ae5b08eee4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.937858 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcvt9\" (UniqueName: \"kubernetes.io/projected/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-kube-api-access-rcvt9\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.971359 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w7fwf"]
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.991900 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "f7a4c945-2b4c-4b30-ad06-9158ce04018e" (UID: "f7a4c945-2b4c-4b30-ad06-9158ce04018e"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.993586 4889 generic.go:334] "Generic (PLEG): container finished" podID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerID="fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d" exitCode=0
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.993632 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x2ncv"
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.993642 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2ncv" event={"ID":"5a2ae970-7f30-494c-ae36-bf25f120c59a","Type":"ContainerDied","Data":"fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d"}
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.994305 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x2ncv" event={"ID":"5a2ae970-7f30-494c-ae36-bf25f120c59a","Type":"ContainerDied","Data":"9df14ab96c581374efefe1cbbc25f7ac0cfb654fd90a5b8fb75a66e7397525ef"}
Feb 19 00:11:18 crc kubenswrapper[4889]: I0219 00:11:18.999785 4889 scope.go:117] "RemoveContainer" containerID="57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7"
Feb 19 00:11:19 crc kubenswrapper[4889]: W0219 00:11:19.006538 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7be42c5f_0df1_4ab4_92d8_e47ca8047150.slice/crio-d3f2204134b7db9fef9ed07cfa302708a765ef4fb8c549d3dc061e4fd4890bdb WatchSource:0}: Error finding container d3f2204134b7db9fef9ed07cfa302708a765ef4fb8c549d3dc061e4fd4890bdb: Status 404 returned error can't find the container with id d3f2204134b7db9fef9ed07cfa302708a765ef4fb8c549d3dc061e4fd4890bdb
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.039193 4889 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f7a4c945-2b4c-4b30-ad06-9158ce04018e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.047367 4889 scope.go:117] "RemoveContainer" containerID="bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.048036 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb\": container with ID starting with bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb not found: ID does not exist" containerID="bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.048079 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb"} err="failed to get container status \"bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb\": rpc error: code = NotFound desc = could not find container \"bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb\": container with ID starting with bc728c66c352efdfb8f1fd5470845d658d10609ec4bc34176bfc95cc2b0a30bb not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.048112 4889 scope.go:117] "RemoveContainer" containerID="42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.048726 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534\": container with ID starting with 42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534 not found: ID does not exist" containerID="42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.048753 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534"} err="failed to get container status \"42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534\": rpc error: code = NotFound desc = could not find container \"42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534\": container with ID starting with 42afff3de0fad31ca5249e19c314301ab48635e4f5854a6890b70cf6bfddc534 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.048769 4889 scope.go:117] "RemoveContainer" containerID="57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.049426 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7\": container with ID starting with 57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7 not found: ID does not exist" containerID="57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.049447 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7"} err="failed to get container status \"57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7\": rpc error: code = NotFound desc = could not find container \"57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7\": container with ID starting with 57c95d0bb390f2a82198e51694a8e027e8a12376f4e392cd4d2c925a67467fa7 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.049463 4889 scope.go:117] "RemoveContainer" containerID="4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.052885 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0bf937f-3956-40f1-9e52-d2000c46291c" (UID: "e0bf937f-3956-40f1-9e52-d2000c46291c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.057721 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f09a3256-3dd8-4e60-bee8-379678cf15f7" (UID: "f09a3256-3dd8-4e60-bee8-379678cf15f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.067579 4889 scope.go:117] "RemoveContainer" containerID="95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.071596 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" (UID: "0533f084-8fff-43b9-b7d6-a4fc0d6b85c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.103011 4889 scope.go:117] "RemoveContainer" containerID="ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.129560 4889 scope.go:117] "RemoveContainer" containerID="4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.131886 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4\": container with ID starting with 4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4 not found: ID does not exist" containerID="4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.131951 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4"} err="failed to get container status \"4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4\": rpc error: code = NotFound desc = could not find container \"4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4\": container with ID starting with 4bd2d6ff3efd8e2634c862cc5e2b1fcc79a727c2d745809c5a856946c7dc45a4 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.131994 4889 scope.go:117] "RemoveContainer" containerID="95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.132579 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a\": container with ID starting with 95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a not found: ID does not exist" containerID="95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.132633 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a"} err="failed to get container status \"95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a\": rpc error: code = NotFound desc = could not find container \"95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a\": container with ID starting with 95426e6004f706dc4680439fe550ee7b76fe6f15f544557b106b49d8b0aace7a not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.132678 4889 scope.go:117] "RemoveContainer" containerID="ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.133104 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411\": container with ID starting with ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411 not found: ID does not exist" containerID="ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.133192 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411"} err="failed to get container status \"ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411\": rpc error: code = NotFound desc = could not find container \"ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411\": container with ID starting with ae0d61f805f71ba307d2efc1e62c937c01f513f74d20c5c5af2df7f2a8225411 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.133290 4889 scope.go:117] "RemoveContainer" containerID="7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.141811 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f09a3256-3dd8-4e60-bee8-379678cf15f7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.141982 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.142014 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bf937f-3956-40f1-9e52-d2000c46291c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.151179 4889 scope.go:117] "RemoveContainer" containerID="0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.156426 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a2ae970-7f30-494c-ae36-bf25f120c59a" (UID: "5a2ae970-7f30-494c-ae36-bf25f120c59a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.166605 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39c55028-d3da-4bad-92f6-34e5250a9276" (UID: "39c55028-d3da-4bad-92f6-34e5250a9276"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.168388 4889 scope.go:117] "RemoveContainer" containerID="14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.230619 4889 scope.go:117] "RemoveContainer" containerID="7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.233846 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a\": container with ID starting with 7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a not found: ID does not exist" containerID="7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.233920 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a"} err="failed to get container status \"7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a\": rpc error: code = NotFound desc = could not find container \"7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a\": container with ID starting with 7b9a924938f1e274c1aaa83646bade727bba8012a3406155a70f2efb994b441a not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.233971 4889 scope.go:117] "RemoveContainer" containerID="0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.236278 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32\": container with ID starting with 0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32 not found: ID does not exist" containerID="0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.236459 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32"} err="failed to get container status \"0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32\": rpc error: code = NotFound desc = could not find container \"0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32\": container with ID starting with 0db283e6fb19b7ad8990464a58d899a34f4a2358e9ba0d9b52d747791da61e32 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.236572 4889 scope.go:117] "RemoveContainer" containerID="14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.236656 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jhjcw"]
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.237362 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea\": container with ID starting with 14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea not found: ID does not exist" containerID="14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.237565 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea"} err="failed to get container status \"14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea\": rpc error: code = NotFound desc = could not find container \"14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea\": container with ID starting with 14a04001777d626bb2164e3c21dca48f404d6698be3a5caa1d408134ba30ceea not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.237670 4889 scope.go:117] "RemoveContainer" containerID="3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.244352 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39c55028-d3da-4bad-92f6-34e5250a9276-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.244379 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2ae970-7f30-494c-ae36-bf25f120c59a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.246766 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jhjcw"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.261057 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgwtt"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.272777 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgwtt"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.273715 4889 scope.go:117] "RemoveContainer" containerID="e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.278636 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfp24"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.282053 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sfp24"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.294340 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s7t7k"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.302642 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s7t7k"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.314432 4889 scope.go:117] "RemoveContainer" containerID="18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.328089 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6jh8"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.336011 4889 scope.go:117] "RemoveContainer" containerID="3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.337036 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e\": container with ID starting with 3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e not found: ID does not exist" containerID="3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.337113 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e"} err="failed to get container status \"3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e\": rpc error: code = NotFound desc = could not find container \"3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e\": container with ID starting with 3c9fe126e6ecb855ce145746ac076382d5d16177427b90cbb3ea72bea37ab27e not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.337150 4889 scope.go:117] "RemoveContainer" containerID="e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.337515 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40\": container with ID starting with e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40 not found: ID does not exist" containerID="e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.337657 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40"} err="failed to get container status \"e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40\": rpc error: code = NotFound desc = could not find container \"e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40\": container with ID starting with e1dc939efa585a54b9afb97cc51509c5175abe8c5135d837da900dd45af55b40 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.337793 4889 scope.go:117] "RemoveContainer" containerID="18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.338407 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700\": container with ID starting with 18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700 not found: ID does not exist" containerID="18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.338497 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700"} err="failed to get container status \"18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700\": rpc error: code = NotFound desc = could not find container \"18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700\": container with ID starting with 18ac9360fab88fbc5d844d867081064483b2089a62da423ee1c6488fcde1a700 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.338548 4889 scope.go:117] "RemoveContainer" containerID="cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.344329 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b6jh8"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.353164 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpfj"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.358300 4889 scope.go:117] "RemoveContainer" containerID="90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.360708 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ghpfj"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.366891 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x2ncv"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.370887 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x2ncv"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.394270 4889 scope.go:117] "RemoveContainer" containerID="3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.412283 4889 scope.go:117] "RemoveContainer" containerID="cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.412888 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66\": container with ID starting with cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66 not found: ID does not exist" containerID="cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.413015 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66"} err="failed to get container status \"cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66\": rpc error: code = NotFound desc = could not find container \"cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66\": container with ID starting with cd160cb00078d2f719ff200ae12fa779d8811fff6063686446e31ae2973e8e66 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.413159 4889 scope.go:117] "RemoveContainer" containerID="90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.413868 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323\": container with ID starting with 90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323 not found: ID does not exist" containerID="90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.413911 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323"} err="failed to get container status \"90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323\": rpc error: code = NotFound desc = could not find container \"90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323\": container with ID starting with 90cbfce6b6e1caadd0b18f77acf70bcca77d6f31675b614294f8d989a5552323 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.413944 4889 scope.go:117] "RemoveContainer" containerID="3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.414207 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5\": container with ID starting with 3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5 not found: ID does not exist" containerID="3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.414242 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5"} err="failed to get container status \"3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5\": rpc error: code = NotFound desc = could not find container \"3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5\": container with ID starting with 3151cb5c3a61a21050841d129cecf518ae94b9c73f0bab7b2ee1f5b7585e34b5 not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.414257 4889 scope.go:117] "RemoveContainer" containerID="1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.430757 4889 scope.go:117] "RemoveContainer" containerID="1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.431262 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e\": container with ID starting with 1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e not found: ID does not exist" containerID="1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.431326 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e"} err="failed to get container status \"1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e\": rpc error: code = NotFound desc = could not find container \"1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e\": container with ID starting with 1803c65c510f6bb77e16fcb70d81473acb5db0d6772dbedfe04615892d6cc34e not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.431362 4889 scope.go:117] "RemoveContainer" containerID="74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.447730 4889 scope.go:117] "RemoveContainer" containerID="0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.469190 4889 scope.go:117] "RemoveContainer" containerID="5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.488696 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j6glr"]
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.489920 4889 scope.go:117] "RemoveContainer" containerID="74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.490541 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f\": container with ID starting with 74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f not found: ID does not exist" containerID="74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.490597 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f"} err="failed to get container status \"74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f\": rpc error: code = NotFound desc = could not find container \"74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f\": container with ID starting with 74051ce1ce5b919c77429d8645a0aefcd9232bf8bfcee9da2b9b27306c9ce90f not found: ID does not exist"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.490637 4889 scope.go:117] "RemoveContainer" containerID="0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1"
Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.490959 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1\": container with ID starting with 0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1 not found: ID does not exist" containerID="0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1"
Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.490986 4889 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1"} err="failed to get container status \"0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1\": rpc error: code = NotFound desc = could not find container \"0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1\": container with ID starting with 0f13f858c1fc99757a71bdb9a9055e0a3952f2aa811ceb50574dabd1945094f1 not found: ID does not exist" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.491000 4889 scope.go:117] "RemoveContainer" containerID="5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288" Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.491410 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288\": container with ID starting with 5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288 not found: ID does not exist" containerID="5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.491447 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288"} err="failed to get container status \"5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288\": rpc error: code = NotFound desc = could not find container \"5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288\": container with ID starting with 5ee9ffd23ed051ae7b3a9781a12b9aaabc93db34fa71bc4bdba7c368d2ae4288 not found: ID does not exist" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.491476 4889 scope.go:117] "RemoveContainer" containerID="fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.493467 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-j6glr"] Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.508644 4889 scope.go:117] "RemoveContainer" containerID="00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.528717 4889 scope.go:117] "RemoveContainer" containerID="57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.544994 4889 scope.go:117] "RemoveContainer" containerID="fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d" Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.546461 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d\": container with ID starting with fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d not found: ID does not exist" containerID="fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.546516 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d"} err="failed to get container status \"fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d\": rpc error: code = NotFound desc = could not find container \"fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d\": container with ID starting with fc63fc6c1db2f7b83686d66242d85ef7f2f2e4ecb2d728e36590f35fee43767d not found: ID does not exist" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.546551 4889 scope.go:117] "RemoveContainer" containerID="00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb" Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.546922 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb\": container with ID starting with 00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb not found: ID does not exist" containerID="00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.546965 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb"} err="failed to get container status \"00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb\": rpc error: code = NotFound desc = could not find container \"00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb\": container with ID starting with 00232ca8695cc2433fc68b30ea532c9bed45dbcf2a90a5ebe319f266e6620bbb not found: ID does not exist" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.546983 4889 scope.go:117] "RemoveContainer" containerID="57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d" Feb 19 00:11:19 crc kubenswrapper[4889]: E0219 00:11:19.547256 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d\": container with ID starting with 57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d not found: ID does not exist" containerID="57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d" Feb 19 00:11:19 crc kubenswrapper[4889]: I0219 00:11:19.547287 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d"} err="failed to get container status \"57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d\": rpc error: code = NotFound desc = could not find container \"57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d\": 
container with ID starting with 57ad1879be5223f6358c1b7618afe9ee17053dd70f9d7bdd076ed24f94501a8d not found: ID does not exist" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.005796 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf" event={"ID":"7be42c5f-0df1-4ab4-92d8-e47ca8047150","Type":"ContainerStarted","Data":"097e679be4bce89a93b98fc656d4b674b9bfa766ec2405fa6457559dd86707cb"} Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.005852 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf" event={"ID":"7be42c5f-0df1-4ab4-92d8-e47ca8047150","Type":"ContainerStarted","Data":"d3f2204134b7db9fef9ed07cfa302708a765ef4fb8c549d3dc061e4fd4890bdb"} Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.006033 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.024383 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.033356 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-w7fwf" podStartSLOduration=2.033331429 podStartE2EDuration="2.033331429s" podCreationTimestamp="2026-02-19 00:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:20.02895919 +0000 UTC m=+305.993624201" watchObservedRunningTime="2026-02-19 00:11:20.033331429 +0000 UTC m=+305.997996440" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.734413 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" 
path="/var/lib/kubelet/pods/0533f084-8fff-43b9-b7d6-a4fc0d6b85c5/volumes" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.735497 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" path="/var/lib/kubelet/pods/1eb8ee4a-7192-4edc-a132-248289edb91f/volumes" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.736071 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" path="/var/lib/kubelet/pods/39c55028-d3da-4bad-92f6-34e5250a9276/volumes" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.737156 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" path="/var/lib/kubelet/pods/4841b44f-786c-4f3d-af26-c6ae5b08eee4/volumes" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.737940 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" path="/var/lib/kubelet/pods/5a2ae970-7f30-494c-ae36-bf25f120c59a/volumes" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.738970 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" path="/var/lib/kubelet/pods/e0bf937f-3956-40f1-9e52-d2000c46291c/volumes" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.739568 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f09a3256-3dd8-4e60-bee8-379678cf15f7" path="/var/lib/kubelet/pods/f09a3256-3dd8-4e60-bee8-379678cf15f7/volumes" Feb 19 00:11:20 crc kubenswrapper[4889]: I0219 00:11:20.740544 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a4c945-2b4c-4b30-ad06-9158ce04018e" path="/var/lib/kubelet/pods/f7a4c945-2b4c-4b30-ad06-9158ce04018e/volumes" Feb 19 00:11:33 crc kubenswrapper[4889]: I0219 00:11:33.374046 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-xdn9r"] Feb 19 00:11:40 crc kubenswrapper[4889]: I0219 00:11:40.630973 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 00:11:47 crc kubenswrapper[4889]: I0219 00:11:47.901262 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p8bs5"] Feb 19 00:11:47 crc kubenswrapper[4889]: I0219 00:11:47.902192 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" podUID="dff75125-78d5-4ee7-8a76-64087e781dd3" containerName="controller-manager" containerID="cri-o://7926a52fa2d460f625d34d51af449d21237300b7185010c7fd73f70d9c5ce4c1" gracePeriod=30 Feb 19 00:11:47 crc kubenswrapper[4889]: I0219 00:11:47.995884 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn"] Feb 19 00:11:47 crc kubenswrapper[4889]: I0219 00:11:47.996776 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" podUID="316969ac-051e-4145-9536-e65bc7103089" containerName="route-controller-manager" containerID="cri-o://ce4fc427c40c4bbfcfccd0edc84c21d02795838442886226b32392cf9bc144f4" gracePeriod=30 Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.203409 4889 generic.go:334] "Generic (PLEG): container finished" podID="dff75125-78d5-4ee7-8a76-64087e781dd3" containerID="7926a52fa2d460f625d34d51af449d21237300b7185010c7fd73f70d9c5ce4c1" exitCode=0 Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.203565 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" 
event={"ID":"dff75125-78d5-4ee7-8a76-64087e781dd3","Type":"ContainerDied","Data":"7926a52fa2d460f625d34d51af449d21237300b7185010c7fd73f70d9c5ce4c1"} Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.214405 4889 generic.go:334] "Generic (PLEG): container finished" podID="316969ac-051e-4145-9536-e65bc7103089" containerID="ce4fc427c40c4bbfcfccd0edc84c21d02795838442886226b32392cf9bc144f4" exitCode=0 Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.214470 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" event={"ID":"316969ac-051e-4145-9536-e65bc7103089","Type":"ContainerDied","Data":"ce4fc427c40c4bbfcfccd0edc84c21d02795838442886226b32392cf9bc144f4"} Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.343316 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.387785 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.532415 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-config\") pod \"dff75125-78d5-4ee7-8a76-64087e781dd3\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.532529 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-proxy-ca-bundles\") pod \"dff75125-78d5-4ee7-8a76-64087e781dd3\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.532588 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/316969ac-051e-4145-9536-e65bc7103089-serving-cert\") pod \"316969ac-051e-4145-9536-e65bc7103089\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.532636 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24b9h\" (UniqueName: \"kubernetes.io/projected/dff75125-78d5-4ee7-8a76-64087e781dd3-kube-api-access-24b9h\") pod \"dff75125-78d5-4ee7-8a76-64087e781dd3\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.532671 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff75125-78d5-4ee7-8a76-64087e781dd3-serving-cert\") pod \"dff75125-78d5-4ee7-8a76-64087e781dd3\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.532705 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-client-ca\") pod \"dff75125-78d5-4ee7-8a76-64087e781dd3\" (UID: \"dff75125-78d5-4ee7-8a76-64087e781dd3\") " Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.532736 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcmpr\" (UniqueName: \"kubernetes.io/projected/316969ac-051e-4145-9536-e65bc7103089-kube-api-access-fcmpr\") pod \"316969ac-051e-4145-9536-e65bc7103089\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.532762 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-config\") pod \"316969ac-051e-4145-9536-e65bc7103089\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.532799 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-client-ca\") pod \"316969ac-051e-4145-9536-e65bc7103089\" (UID: \"316969ac-051e-4145-9536-e65bc7103089\") " Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.533831 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-client-ca" (OuterVolumeSpecName: "client-ca") pod "316969ac-051e-4145-9536-e65bc7103089" (UID: "316969ac-051e-4145-9536-e65bc7103089"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.534609 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dff75125-78d5-4ee7-8a76-64087e781dd3" (UID: "dff75125-78d5-4ee7-8a76-64087e781dd3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.535241 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-config" (OuterVolumeSpecName: "config") pod "dff75125-78d5-4ee7-8a76-64087e781dd3" (UID: "dff75125-78d5-4ee7-8a76-64087e781dd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.535312 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-client-ca" (OuterVolumeSpecName: "client-ca") pod "dff75125-78d5-4ee7-8a76-64087e781dd3" (UID: "dff75125-78d5-4ee7-8a76-64087e781dd3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.535373 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-config" (OuterVolumeSpecName: "config") pod "316969ac-051e-4145-9536-e65bc7103089" (UID: "316969ac-051e-4145-9536-e65bc7103089"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.541082 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dff75125-78d5-4ee7-8a76-64087e781dd3-kube-api-access-24b9h" (OuterVolumeSpecName: "kube-api-access-24b9h") pod "dff75125-78d5-4ee7-8a76-64087e781dd3" (UID: "dff75125-78d5-4ee7-8a76-64087e781dd3"). InnerVolumeSpecName "kube-api-access-24b9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.541120 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/316969ac-051e-4145-9536-e65bc7103089-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "316969ac-051e-4145-9536-e65bc7103089" (UID: "316969ac-051e-4145-9536-e65bc7103089"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.541179 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/316969ac-051e-4145-9536-e65bc7103089-kube-api-access-fcmpr" (OuterVolumeSpecName: "kube-api-access-fcmpr") pod "316969ac-051e-4145-9536-e65bc7103089" (UID: "316969ac-051e-4145-9536-e65bc7103089"). InnerVolumeSpecName "kube-api-access-fcmpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.541472 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dff75125-78d5-4ee7-8a76-64087e781dd3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dff75125-78d5-4ee7-8a76-64087e781dd3" (UID: "dff75125-78d5-4ee7-8a76-64087e781dd3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.634075 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.634146 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/316969ac-051e-4145-9536-e65bc7103089-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.634160 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24b9h\" (UniqueName: \"kubernetes.io/projected/dff75125-78d5-4ee7-8a76-64087e781dd3-kube-api-access-24b9h\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.634177 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff75125-78d5-4ee7-8a76-64087e781dd3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.634191 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.634202 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcmpr\" (UniqueName: \"kubernetes.io/projected/316969ac-051e-4145-9536-e65bc7103089-kube-api-access-fcmpr\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.634214 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.634255 4889 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/316969ac-051e-4145-9536-e65bc7103089-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:48 crc kubenswrapper[4889]: I0219 00:11:48.634266 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff75125-78d5-4ee7-8a76-64087e781dd3-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.223876 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" event={"ID":"dff75125-78d5-4ee7-8a76-64087e781dd3","Type":"ContainerDied","Data":"00e79541cbe82d0d789966c8bbb7ec3aba8548c9c9b9ecd325ddd9b496deeba3"} Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.223960 4889 scope.go:117] "RemoveContainer" containerID="7926a52fa2d460f625d34d51af449d21237300b7185010c7fd73f70d9c5ce4c1" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.224008 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-p8bs5" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.227463 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" event={"ID":"316969ac-051e-4145-9536-e65bc7103089","Type":"ContainerDied","Data":"2513a276bc7ba5b31b0a8e2fd70cbe32c52ef1e404225f3bb7c6e0ea309dd0c1"} Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.227526 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.251826 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p8bs5"] Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.257900 4889 scope.go:117] "RemoveContainer" containerID="ce4fc427c40c4bbfcfccd0edc84c21d02795838442886226b32392cf9bc144f4" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.263020 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p8bs5"] Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.271831 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn"] Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.276386 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w52zn"] Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.316002 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56f548c9c8-vhkct"] Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.316754 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.316856 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.316931 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="316969ac-051e-4145-9536-e65bc7103089" containerName="route-controller-manager" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.317069 4889 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="316969ac-051e-4145-9536-e65bc7103089" containerName="route-controller-manager" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.317162 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.317251 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.317341 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.317418 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.317498 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.317570 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.317663 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.317741 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.317841 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.317940 4889 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.318030 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.318129 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.318237 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.318313 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.318379 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.318443 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.318498 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dff75125-78d5-4ee7-8a76-64087e781dd3" containerName="controller-manager" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.318552 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="dff75125-78d5-4ee7-8a76-64087e781dd3" containerName="controller-manager" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.318653 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.318744 4889 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.318809 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.318862 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.318917 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.318969 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.319022 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.319085 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.319146 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.319200 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.319277 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.319340 4889 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.319411 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.319470 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.319532 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.319616 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.319722 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.319792 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerName="extract-utilities" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.319848 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.319900 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.319964 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.320047 4889 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.320137 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.320268 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerName="extract-content" Feb 19 00:11:49 crc kubenswrapper[4889]: E0219 00:11:49.320353 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a4c945-2b4c-4b30-ad06-9158ce04018e" containerName="marketplace-operator" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.320425 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a4c945-2b4c-4b30-ad06-9158ce04018e" containerName="marketplace-operator" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.320731 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb8ee4a-7192-4edc-a132-248289edb91f" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.320844 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="dff75125-78d5-4ee7-8a76-64087e781dd3" containerName="controller-manager" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.320921 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0533f084-8fff-43b9-b7d6-a4fc0d6b85c5" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.320996 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f09a3256-3dd8-4e60-bee8-379678cf15f7" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.321068 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c55028-d3da-4bad-92f6-34e5250a9276" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.321136 4889 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4841b44f-786c-4f3d-af26-c6ae5b08eee4" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.321212 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="316969ac-051e-4145-9536-e65bc7103089" containerName="route-controller-manager" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.321301 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bf937f-3956-40f1-9e52-d2000c46291c" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.321358 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2ae970-7f30-494c-ae36-bf25f120c59a" containerName="registry-server" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.321422 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a4c945-2b4c-4b30-ad06-9158ce04018e" containerName="marketplace-operator" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.322012 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.327095 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.327471 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.327651 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.327832 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.327166 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.329405 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.335419 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f548c9c8-vhkct"] Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.342001 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm"] Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.343066 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.347153 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.347698 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa211c1-771a-4ee8-a54b-3fb25271edc4-serving-cert\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.347810 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn7mq\" (UniqueName: \"kubernetes.io/projected/a93e08bf-88d1-4d28-bf7f-be06f499b509-kube-api-access-nn7mq\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.347848 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-config\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.347881 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a93e08bf-88d1-4d28-bf7f-be06f499b509-serving-cert\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " 
pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.347913 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-client-ca\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.347903 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.347954 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk5xq\" (UniqueName: \"kubernetes.io/projected/eaa211c1-771a-4ee8-a54b-3fb25271edc4-kube-api-access-mk5xq\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.348327 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.348385 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-proxy-ca-bundles\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.348630 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-client-ca\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.348683 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-config\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.348903 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.349073 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.349506 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.354805 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.372564 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm"] Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.450817 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-proxy-ca-bundles\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: 
\"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.450902 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-client-ca\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.450943 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-config\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.451006 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa211c1-771a-4ee8-a54b-3fb25271edc4-serving-cert\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.451065 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn7mq\" (UniqueName: \"kubernetes.io/projected/a93e08bf-88d1-4d28-bf7f-be06f499b509-kube-api-access-nn7mq\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.451093 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-config\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.451120 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a93e08bf-88d1-4d28-bf7f-be06f499b509-serving-cert\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.451149 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-client-ca\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.451176 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk5xq\" (UniqueName: \"kubernetes.io/projected/eaa211c1-771a-4ee8-a54b-3fb25271edc4-kube-api-access-mk5xq\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.452455 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-client-ca\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 
00:11:49.452482 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-proxy-ca-bundles\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.452925 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-config\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.453923 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-client-ca\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.454755 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-config\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.458259 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a93e08bf-88d1-4d28-bf7f-be06f499b509-serving-cert\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " 
pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.465409 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa211c1-771a-4ee8-a54b-3fb25271edc4-serving-cert\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.478325 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn7mq\" (UniqueName: \"kubernetes.io/projected/a93e08bf-88d1-4d28-bf7f-be06f499b509-kube-api-access-nn7mq\") pod \"controller-manager-56f548c9c8-vhkct\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.478414 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk5xq\" (UniqueName: \"kubernetes.io/projected/eaa211c1-771a-4ee8-a54b-3fb25271edc4-kube-api-access-mk5xq\") pod \"route-controller-manager-7f5776b745-zhwvm\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.637387 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.664017 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.885156 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f548c9c8-vhkct"] Feb 19 00:11:49 crc kubenswrapper[4889]: I0219 00:11:49.927885 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm"] Feb 19 00:11:49 crc kubenswrapper[4889]: W0219 00:11:49.933548 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaa211c1_771a_4ee8_a54b_3fb25271edc4.slice/crio-d0a04f136b7e1d4ed0e4fdd27f5a3f580c3230109d3a3c4327c989c776191453 WatchSource:0}: Error finding container d0a04f136b7e1d4ed0e4fdd27f5a3f580c3230109d3a3c4327c989c776191453: Status 404 returned error can't find the container with id d0a04f136b7e1d4ed0e4fdd27f5a3f580c3230109d3a3c4327c989c776191453 Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.237917 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" event={"ID":"eaa211c1-771a-4ee8-a54b-3fb25271edc4","Type":"ContainerStarted","Data":"a91c0eaba50235f11eac5acf565c72d9757c9ff9b74d78f3c285e892dccd3247"} Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.238039 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" event={"ID":"eaa211c1-771a-4ee8-a54b-3fb25271edc4","Type":"ContainerStarted","Data":"d0a04f136b7e1d4ed0e4fdd27f5a3f580c3230109d3a3c4327c989c776191453"} Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.238489 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:50 crc 
kubenswrapper[4889]: I0219 00:11:50.240149 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" event={"ID":"a93e08bf-88d1-4d28-bf7f-be06f499b509","Type":"ContainerStarted","Data":"08c1d81230c4083cabab0c31fa5f748cfa2ee878cc79ae234e712679f9c8916c"} Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.240192 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" event={"ID":"a93e08bf-88d1-4d28-bf7f-be06f499b509","Type":"ContainerStarted","Data":"e556d272de7de2cca952b8a98bb4f4aa6dcc3ecd1fb78bf4e8e66d2b51eadf50"} Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.240636 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.247871 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.261725 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" podStartSLOduration=1.261692957 podStartE2EDuration="1.261692957s" podCreationTimestamp="2026-02-19 00:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:50.257492977 +0000 UTC m=+336.222157988" watchObservedRunningTime="2026-02-19 00:11:50.261692957 +0000 UTC m=+336.226357948" Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.293977 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" podStartSLOduration=1.293936962 podStartE2EDuration="1.293936962s" podCreationTimestamp="2026-02-19 00:11:49 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:50.288289627 +0000 UTC m=+336.252954638" watchObservedRunningTime="2026-02-19 00:11:50.293936962 +0000 UTC m=+336.258601953" Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.617526 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.734739 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="316969ac-051e-4145-9536-e65bc7103089" path="/var/lib/kubelet/pods/316969ac-051e-4145-9536-e65bc7103089/volumes" Feb 19 00:11:50 crc kubenswrapper[4889]: I0219 00:11:50.735530 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dff75125-78d5-4ee7-8a76-64087e781dd3" path="/var/lib/kubelet/pods/dff75125-78d5-4ee7-8a76-64087e781dd3/volumes" Feb 19 00:11:58 crc kubenswrapper[4889]: I0219 00:11:58.410368 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" podUID="f3d1db96-e34d-4c20-9556-edec9e27858c" containerName="oauth-openshift" containerID="cri-o://fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a" gracePeriod=15 Feb 19 00:11:58 crc kubenswrapper[4889]: I0219 00:11:58.899929 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" Feb 19 00:11:58 crc kubenswrapper[4889]: I0219 00:11:58.942551 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-868547c79-5tdwd"] Feb 19 00:11:58 crc kubenswrapper[4889]: E0219 00:11:58.942787 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d1db96-e34d-4c20-9556-edec9e27858c" containerName="oauth-openshift" Feb 19 00:11:58 crc kubenswrapper[4889]: I0219 00:11:58.942800 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d1db96-e34d-4c20-9556-edec9e27858c" containerName="oauth-openshift" Feb 19 00:11:58 crc kubenswrapper[4889]: I0219 00:11:58.942914 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d1db96-e34d-4c20-9556-edec9e27858c" containerName="oauth-openshift" Feb 19 00:11:58 crc kubenswrapper[4889]: I0219 00:11:58.943276 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868547c79-5tdwd" Feb 19 00:11:58 crc kubenswrapper[4889]: I0219 00:11:58.962909 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-868547c79-5tdwd"] Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.095794 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-ocp-branding-template\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") " Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.095878 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-router-certs\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: 
\"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.095923 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-error\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.095948 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-serving-cert\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.095983 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65h2h\" (UniqueName: \"kubernetes.io/projected/f3d1db96-e34d-4c20-9556-edec9e27858c-kube-api-access-65h2h\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.096011 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-policies\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.096042 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-provider-selection\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.096105 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-service-ca\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.096137 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-cliconfig\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.098308 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.098905 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.098956 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-idp-0-file-data\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.099108 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-session\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.099520 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.099383 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-trusted-ca-bundle\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.099954 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-login\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.100123 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-dir\") pod \"f3d1db96-e34d-4c20-9556-edec9e27858c\" (UID: \"f3d1db96-e34d-4c20-9556-edec9e27858c\") "
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.102121 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.104462 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.104704 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-template-login\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.104805 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.104987 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-session\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.105405 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-router-certs\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.105495 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.105378 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.106108 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.105878 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-audit-dir\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.106404 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.106685 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24xxb\" (UniqueName: \"kubernetes.io/projected/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-kube-api-access-24xxb\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.106809 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-service-ca\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.107648 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.107805 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.108150 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.108051 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-audit-policies\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.108375 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-template-error\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.108673 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.108723 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.109000 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.109410 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.110041 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112494 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112736 4889 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112769 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112853 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112895 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112914 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112939 4889 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112960 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112979 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.112996 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.113016 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.114092 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.118169 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d1db96-e34d-4c20-9556-edec9e27858c-kube-api-access-65h2h" (OuterVolumeSpecName: "kube-api-access-65h2h") pod "f3d1db96-e34d-4c20-9556-edec9e27858c" (UID: "f3d1db96-e34d-4c20-9556-edec9e27858c"). InnerVolumeSpecName "kube-api-access-65h2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.214985 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-template-error\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215078 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215126 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215203 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215247 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215282 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-session\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215303 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-template-login\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215333 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-router-certs\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215353 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215383 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-audit-dir\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215409 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24xxb\" (UniqueName: \"kubernetes.io/projected/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-kube-api-access-24xxb\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215430 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-service-ca\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215455 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215477 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-audit-policies\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215535 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215548 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65h2h\" (UniqueName: \"kubernetes.io/projected/f3d1db96-e34d-4c20-9556-edec9e27858c-kube-api-access-65h2h\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.215562 4889 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f3d1db96-e34d-4c20-9556-edec9e27858c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.217380 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-cliconfig\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.217612 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-audit-dir\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.217695 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-service-ca\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.217737 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.218497 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-audit-policies\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.219839 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-session\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.219617 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-template-error\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.220170 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-serving-cert\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.220625 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.220744 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.221061 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-router-certs\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.222360 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-user-template-login\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.223302 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.236041 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24xxb\" (UniqueName: \"kubernetes.io/projected/09fada8e-e2c6-4f65-b60d-95cf2bf72d93-kube-api-access-24xxb\") pod \"oauth-openshift-868547c79-5tdwd\" (UID: \"09fada8e-e2c6-4f65-b60d-95cf2bf72d93\") " pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.270009 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.301257 4889 generic.go:334] "Generic (PLEG): container finished" podID="f3d1db96-e34d-4c20-9556-edec9e27858c" containerID="fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a" exitCode=0
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.301339 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" event={"ID":"f3d1db96-e34d-4c20-9556-edec9e27858c","Type":"ContainerDied","Data":"fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a"}
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.301400 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r" event={"ID":"f3d1db96-e34d-4c20-9556-edec9e27858c","Type":"ContainerDied","Data":"77f619253b54c4214dae9fb45937fea91bf72595805ea565f92cd5dab4bda0cb"}
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.301428 4889 scope.go:117] "RemoveContainer" containerID="fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.301845 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xdn9r"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.324549 4889 scope.go:117] "RemoveContainer" containerID="fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a"
Feb 19 00:11:59 crc kubenswrapper[4889]: E0219 00:11:59.326690 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a\": container with ID starting with fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a not found: ID does not exist" containerID="fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.326750 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a"} err="failed to get container status \"fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a\": rpc error: code = NotFound desc = could not find container \"fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a\": container with ID starting with fbf2c0465b7ee2cf1d64824eabd4f20714747f344a0dfc9655b39cc84ffecc9a not found: ID does not exist"
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.335754 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xdn9r"]
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.339227 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xdn9r"]
Feb 19 00:11:59 crc kubenswrapper[4889]: I0219 00:11:59.688075 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-868547c79-5tdwd"]
Feb 19 00:12:00 crc kubenswrapper[4889]: I0219 00:12:00.309338 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-868547c79-5tdwd" event={"ID":"09fada8e-e2c6-4f65-b60d-95cf2bf72d93","Type":"ContainerStarted","Data":"ddac26bff726c61b313466198ef98a1e8ced1177adbc0e2b81267b9909db63fd"}
Feb 19 00:12:00 crc kubenswrapper[4889]: I0219 00:12:00.309411 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-868547c79-5tdwd" event={"ID":"09fada8e-e2c6-4f65-b60d-95cf2bf72d93","Type":"ContainerStarted","Data":"910b764abab21bd0cbe5689457bfef4dbdf23a51c742ed07615e8c15acb67e69"}
Feb 19 00:12:00 crc kubenswrapper[4889]: I0219 00:12:00.309577 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:12:00 crc kubenswrapper[4889]: I0219 00:12:00.315980 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-868547c79-5tdwd"
Feb 19 00:12:00 crc kubenswrapper[4889]: I0219 00:12:00.334215 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-868547c79-5tdwd" podStartSLOduration=27.334187071 podStartE2EDuration="27.334187071s" podCreationTimestamp="2026-02-19 00:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:00.330492881 +0000 UTC m=+346.295157892" watchObservedRunningTime="2026-02-19 00:12:00.334187071 +0000 UTC m=+346.298852062"
Feb 19 00:12:00 crc kubenswrapper[4889]: I0219 00:12:00.731550 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d1db96-e34d-4c20-9556-edec9e27858c" path="/var/lib/kubelet/pods/f3d1db96-e34d-4c20-9556-edec9e27858c/volumes"
Feb 19 00:12:04 crc kubenswrapper[4889]: I0219 00:12:04.943172 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56f548c9c8-vhkct"]
Feb 19 00:12:04 crc kubenswrapper[4889]: I0219 00:12:04.943562 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" podUID="a93e08bf-88d1-4d28-bf7f-be06f499b509" containerName="controller-manager" containerID="cri-o://08c1d81230c4083cabab0c31fa5f748cfa2ee878cc79ae234e712679f9c8916c" gracePeriod=30
Feb 19 00:12:04 crc kubenswrapper[4889]: I0219 00:12:04.960210 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm"]
Feb 19 00:12:04 crc kubenswrapper[4889]: I0219 00:12:04.960656 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" podUID="eaa211c1-771a-4ee8-a54b-3fb25271edc4" containerName="route-controller-manager" containerID="cri-o://a91c0eaba50235f11eac5acf565c72d9757c9ff9b74d78f3c285e892dccd3247" gracePeriod=30
Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.338510 4889 generic.go:334] "Generic (PLEG): container finished" podID="eaa211c1-771a-4ee8-a54b-3fb25271edc4" containerID="a91c0eaba50235f11eac5acf565c72d9757c9ff9b74d78f3c285e892dccd3247" exitCode=0
Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.339065 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" event={"ID":"eaa211c1-771a-4ee8-a54b-3fb25271edc4","Type":"ContainerDied","Data":"a91c0eaba50235f11eac5acf565c72d9757c9ff9b74d78f3c285e892dccd3247"}
Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.340666 4889 generic.go:334] "Generic (PLEG): container finished" podID="a93e08bf-88d1-4d28-bf7f-be06f499b509" containerID="08c1d81230c4083cabab0c31fa5f748cfa2ee878cc79ae234e712679f9c8916c" exitCode=0
Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.340693 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" event={"ID":"a93e08bf-88d1-4d28-bf7f-be06f499b509","Type":"ContainerDied","Data":"08c1d81230c4083cabab0c31fa5f748cfa2ee878cc79ae234e712679f9c8916c"} Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.515546 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.519565 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.605882 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-client-ca\") pod \"a93e08bf-88d1-4d28-bf7f-be06f499b509\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.606001 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk5xq\" (UniqueName: \"kubernetes.io/projected/eaa211c1-771a-4ee8-a54b-3fb25271edc4-kube-api-access-mk5xq\") pod \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.606089 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-config\") pod \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.606130 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa211c1-771a-4ee8-a54b-3fb25271edc4-serving-cert\") pod 
\"eaa211c1-771a-4ee8-a54b-3fb25271edc4\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.606187 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a93e08bf-88d1-4d28-bf7f-be06f499b509-serving-cert\") pod \"a93e08bf-88d1-4d28-bf7f-be06f499b509\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.606236 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-proxy-ca-bundles\") pod \"a93e08bf-88d1-4d28-bf7f-be06f499b509\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.606257 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-config\") pod \"a93e08bf-88d1-4d28-bf7f-be06f499b509\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.606285 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn7mq\" (UniqueName: \"kubernetes.io/projected/a93e08bf-88d1-4d28-bf7f-be06f499b509-kube-api-access-nn7mq\") pod \"a93e08bf-88d1-4d28-bf7f-be06f499b509\" (UID: \"a93e08bf-88d1-4d28-bf7f-be06f499b509\") " Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.606316 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-client-ca\") pod \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\" (UID: \"eaa211c1-771a-4ee8-a54b-3fb25271edc4\") " Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.607211 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a93e08bf-88d1-4d28-bf7f-be06f499b509" (UID: "a93e08bf-88d1-4d28-bf7f-be06f499b509"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.607471 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-client-ca" (OuterVolumeSpecName: "client-ca") pod "eaa211c1-771a-4ee8-a54b-3fb25271edc4" (UID: "eaa211c1-771a-4ee8-a54b-3fb25271edc4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.607478 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-client-ca" (OuterVolumeSpecName: "client-ca") pod "a93e08bf-88d1-4d28-bf7f-be06f499b509" (UID: "a93e08bf-88d1-4d28-bf7f-be06f499b509"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.607592 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-config" (OuterVolumeSpecName: "config") pod "a93e08bf-88d1-4d28-bf7f-be06f499b509" (UID: "a93e08bf-88d1-4d28-bf7f-be06f499b509"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.607821 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-config" (OuterVolumeSpecName: "config") pod "eaa211c1-771a-4ee8-a54b-3fb25271edc4" (UID: "eaa211c1-771a-4ee8-a54b-3fb25271edc4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.614531 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93e08bf-88d1-4d28-bf7f-be06f499b509-kube-api-access-nn7mq" (OuterVolumeSpecName: "kube-api-access-nn7mq") pod "a93e08bf-88d1-4d28-bf7f-be06f499b509" (UID: "a93e08bf-88d1-4d28-bf7f-be06f499b509"). InnerVolumeSpecName "kube-api-access-nn7mq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.616496 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaa211c1-771a-4ee8-a54b-3fb25271edc4-kube-api-access-mk5xq" (OuterVolumeSpecName: "kube-api-access-mk5xq") pod "eaa211c1-771a-4ee8-a54b-3fb25271edc4" (UID: "eaa211c1-771a-4ee8-a54b-3fb25271edc4"). InnerVolumeSpecName "kube-api-access-mk5xq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.616540 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaa211c1-771a-4ee8-a54b-3fb25271edc4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eaa211c1-771a-4ee8-a54b-3fb25271edc4" (UID: "eaa211c1-771a-4ee8-a54b-3fb25271edc4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.616718 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93e08bf-88d1-4d28-bf7f-be06f499b509-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a93e08bf-88d1-4d28-bf7f-be06f499b509" (UID: "a93e08bf-88d1-4d28-bf7f-be06f499b509"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.707547 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa211c1-771a-4ee8-a54b-3fb25271edc4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.707680 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a93e08bf-88d1-4d28-bf7f-be06f499b509-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.707703 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.707715 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.707729 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn7mq\" (UniqueName: \"kubernetes.io/projected/a93e08bf-88d1-4d28-bf7f-be06f499b509-kube-api-access-nn7mq\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.707741 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.707751 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a93e08bf-88d1-4d28-bf7f-be06f499b509-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.707764 4889 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-mk5xq\" (UniqueName: \"kubernetes.io/projected/eaa211c1-771a-4ee8-a54b-3fb25271edc4-kube-api-access-mk5xq\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:05 crc kubenswrapper[4889]: I0219 00:12:05.707781 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa211c1-771a-4ee8-a54b-3fb25271edc4-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.348491 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" event={"ID":"a93e08bf-88d1-4d28-bf7f-be06f499b509","Type":"ContainerDied","Data":"e556d272de7de2cca952b8a98bb4f4aa6dcc3ecd1fb78bf4e8e66d2b51eadf50"} Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.348538 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f548c9c8-vhkct" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.348900 4889 scope.go:117] "RemoveContainer" containerID="08c1d81230c4083cabab0c31fa5f748cfa2ee878cc79ae234e712679f9c8916c" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.351988 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" event={"ID":"eaa211c1-771a-4ee8-a54b-3fb25271edc4","Type":"ContainerDied","Data":"d0a04f136b7e1d4ed0e4fdd27f5a3f580c3230109d3a3c4327c989c776191453"} Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.352077 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.370560 4889 scope.go:117] "RemoveContainer" containerID="a91c0eaba50235f11eac5acf565c72d9757c9ff9b74d78f3c285e892dccd3247" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.383914 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56f548c9c8-vhkct"] Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.389864 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56f548c9c8-vhkct"] Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.396853 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm"] Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.402430 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5776b745-zhwvm"] Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.732520 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93e08bf-88d1-4d28-bf7f-be06f499b509" path="/var/lib/kubelet/pods/a93e08bf-88d1-4d28-bf7f-be06f499b509/volumes" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.733186 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaa211c1-771a-4ee8-a54b-3fb25271edc4" path="/var/lib/kubelet/pods/eaa211c1-771a-4ee8-a54b-3fb25271edc4/volumes" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.946781 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-9kpd9"] Feb 19 00:12:06 crc kubenswrapper[4889]: E0219 00:12:06.947569 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaa211c1-771a-4ee8-a54b-3fb25271edc4" containerName="route-controller-manager" Feb 19 00:12:06 crc 
kubenswrapper[4889]: I0219 00:12:06.947597 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaa211c1-771a-4ee8-a54b-3fb25271edc4" containerName="route-controller-manager" Feb 19 00:12:06 crc kubenswrapper[4889]: E0219 00:12:06.947619 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93e08bf-88d1-4d28-bf7f-be06f499b509" containerName="controller-manager" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.947631 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93e08bf-88d1-4d28-bf7f-be06f499b509" containerName="controller-manager" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.947898 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaa211c1-771a-4ee8-a54b-3fb25271edc4" containerName="route-controller-manager" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.947920 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93e08bf-88d1-4d28-bf7f-be06f499b509" containerName="controller-manager" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.948897 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.955535 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq"] Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.959646 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.959751 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.963733 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.963864 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.963885 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.964146 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.965286 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.965652 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.965852 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.965978 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.966133 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.966671 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 00:12:06 crc 
kubenswrapper[4889]: I0219 00:12:06.970932 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.971805 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.987042 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-9kpd9"] Feb 19 00:12:06 crc kubenswrapper[4889]: I0219 00:12:06.992209 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq"] Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.127562 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-config\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.127688 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-config\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.127725 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-client-ca\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " 
pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.127757 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-proxy-ca-bundles\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.127779 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-client-ca\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.127894 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmsch\" (UniqueName: \"kubernetes.io/projected/25845a04-fe5d-46cf-b166-aa983f8e6e4e-kube-api-access-fmsch\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.127994 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e353cd71-4a24-47b8-8cf7-83fae9f6df96-serving-cert\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.128041 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25845a04-fe5d-46cf-b166-aa983f8e6e4e-serving-cert\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.128070 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8n96\" (UniqueName: \"kubernetes.io/projected/e353cd71-4a24-47b8-8cf7-83fae9f6df96-kube-api-access-z8n96\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.229257 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-config\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.229319 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-client-ca\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.229349 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-proxy-ca-bundles\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " 
pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.229368 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-client-ca\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.229390 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmsch\" (UniqueName: \"kubernetes.io/projected/25845a04-fe5d-46cf-b166-aa983f8e6e4e-kube-api-access-fmsch\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.229421 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e353cd71-4a24-47b8-8cf7-83fae9f6df96-serving-cert\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.229447 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25845a04-fe5d-46cf-b166-aa983f8e6e4e-serving-cert\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.229467 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8n96\" (UniqueName: 
\"kubernetes.io/projected/e353cd71-4a24-47b8-8cf7-83fae9f6df96-kube-api-access-z8n96\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.229494 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-config\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.230759 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-config\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.231025 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-config\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.231374 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-client-ca\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.231766 4889 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-client-ca\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.232477 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-proxy-ca-bundles\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.235815 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e353cd71-4a24-47b8-8cf7-83fae9f6df96-serving-cert\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.236652 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25845a04-fe5d-46cf-b166-aa983f8e6e4e-serving-cert\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.248633 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8n96\" (UniqueName: \"kubernetes.io/projected/e353cd71-4a24-47b8-8cf7-83fae9f6df96-kube-api-access-z8n96\") pod \"controller-manager-75b4f6d956-9kpd9\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") " pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc 
kubenswrapper[4889]: I0219 00:12:07.258402 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmsch\" (UniqueName: \"kubernetes.io/projected/25845a04-fe5d-46cf-b166-aa983f8e6e4e-kube-api-access-fmsch\") pod \"route-controller-manager-9d8c44798-s5fcq\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") " pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.292111 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.301037 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.491877 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-9kpd9"] Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.563746 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq"] Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.781819 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:12:07 crc kubenswrapper[4889]: I0219 00:12:07.781908 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 
00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.369489 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" event={"ID":"e353cd71-4a24-47b8-8cf7-83fae9f6df96","Type":"ContainerStarted","Data":"7d994deaee0067ac1dc457d35178d5a5d9fc5ec30bfc0d857869a5def766baaf"} Feb 19 00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.370076 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" event={"ID":"e353cd71-4a24-47b8-8cf7-83fae9f6df96","Type":"ContainerStarted","Data":"26a54f1c011cc42daa3d2d68554f07f5e40c1566a93624f4ec63bc32b464835a"} Feb 19 00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.370107 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.372586 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" event={"ID":"25845a04-fe5d-46cf-b166-aa983f8e6e4e","Type":"ContainerStarted","Data":"37da213e39180a7a115670b6a20385c54e411360acd4aadcb3f6b161eb497e43"} Feb 19 00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.372647 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" event={"ID":"25845a04-fe5d-46cf-b166-aa983f8e6e4e","Type":"ContainerStarted","Data":"2eee7c3b6df73f2dbebcb36f6c8ee2ccdbe87de97eb8abbf97c03275851a2497"} Feb 19 00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.372908 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.376905 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.378187 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" Feb 19 00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.393799 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" podStartSLOduration=4.3937732369999996 podStartE2EDuration="4.393773237s" podCreationTimestamp="2026-02-19 00:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:08.389438472 +0000 UTC m=+354.354103473" watchObservedRunningTime="2026-02-19 00:12:08.393773237 +0000 UTC m=+354.358438228" Feb 19 00:12:08 crc kubenswrapper[4889]: I0219 00:12:08.434586 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" podStartSLOduration=4.434553935 podStartE2EDuration="4.434553935s" podCreationTimestamp="2026-02-19 00:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:08.430296254 +0000 UTC m=+354.394961255" watchObservedRunningTime="2026-02-19 00:12:08.434553935 +0000 UTC m=+354.399218946" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.514766 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dsvjd"] Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.517117 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.520537 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.526186 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsvjd"] Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.676622 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-catalog-content\") pod \"redhat-marketplace-dsvjd\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.676692 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd5vx\" (UniqueName: \"kubernetes.io/projected/19c688b4-e818-423b-8a30-79fc47b8671d-kube-api-access-gd5vx\") pod \"redhat-marketplace-dsvjd\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.676794 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-utilities\") pod \"redhat-marketplace-dsvjd\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.706257 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94ccv"] Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.707626 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.710533 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.729277 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94ccv"] Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.778615 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-utilities\") pod \"redhat-marketplace-dsvjd\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.778671 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-catalog-content\") pod \"redhat-marketplace-dsvjd\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.778957 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd5vx\" (UniqueName: \"kubernetes.io/projected/19c688b4-e818-423b-8a30-79fc47b8671d-kube-api-access-gd5vx\") pod \"redhat-marketplace-dsvjd\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.779118 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-catalog-content\") pod \"redhat-marketplace-dsvjd\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " 
pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.779635 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a80ba53-e06d-4dfc-b26c-6f251cb67d26-catalog-content\") pod \"redhat-operators-94ccv\" (UID: \"6a80ba53-e06d-4dfc-b26c-6f251cb67d26\") " pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.779483 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-utilities\") pod \"redhat-marketplace-dsvjd\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.779737 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr7wt\" (UniqueName: \"kubernetes.io/projected/6a80ba53-e06d-4dfc-b26c-6f251cb67d26-kube-api-access-xr7wt\") pod \"redhat-operators-94ccv\" (UID: \"6a80ba53-e06d-4dfc-b26c-6f251cb67d26\") " pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.779888 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a80ba53-e06d-4dfc-b26c-6f251cb67d26-utilities\") pod \"redhat-operators-94ccv\" (UID: \"6a80ba53-e06d-4dfc-b26c-6f251cb67d26\") " pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.801421 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd5vx\" (UniqueName: \"kubernetes.io/projected/19c688b4-e818-423b-8a30-79fc47b8671d-kube-api-access-gd5vx\") pod \"redhat-marketplace-dsvjd\" (UID: 
\"19c688b4-e818-423b-8a30-79fc47b8671d\") " pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.846145 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.881504 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr7wt\" (UniqueName: \"kubernetes.io/projected/6a80ba53-e06d-4dfc-b26c-6f251cb67d26-kube-api-access-xr7wt\") pod \"redhat-operators-94ccv\" (UID: \"6a80ba53-e06d-4dfc-b26c-6f251cb67d26\") " pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.881597 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a80ba53-e06d-4dfc-b26c-6f251cb67d26-utilities\") pod \"redhat-operators-94ccv\" (UID: \"6a80ba53-e06d-4dfc-b26c-6f251cb67d26\") " pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.881666 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a80ba53-e06d-4dfc-b26c-6f251cb67d26-catalog-content\") pod \"redhat-operators-94ccv\" (UID: \"6a80ba53-e06d-4dfc-b26c-6f251cb67d26\") " pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.882467 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a80ba53-e06d-4dfc-b26c-6f251cb67d26-catalog-content\") pod \"redhat-operators-94ccv\" (UID: \"6a80ba53-e06d-4dfc-b26c-6f251cb67d26\") " pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.882878 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/6a80ba53-e06d-4dfc-b26c-6f251cb67d26-utilities\") pod \"redhat-operators-94ccv\" (UID: \"6a80ba53-e06d-4dfc-b26c-6f251cb67d26\") " pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:17 crc kubenswrapper[4889]: I0219 00:12:17.902436 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr7wt\" (UniqueName: \"kubernetes.io/projected/6a80ba53-e06d-4dfc-b26c-6f251cb67d26-kube-api-access-xr7wt\") pod \"redhat-operators-94ccv\" (UID: \"6a80ba53-e06d-4dfc-b26c-6f251cb67d26\") " pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.025259 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94ccv" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.148709 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c78tp"] Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.159306 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.169282 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c78tp"] Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.288711 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlp67\" (UniqueName: \"kubernetes.io/projected/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-kube-api-access-xlp67\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.289273 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-bound-sa-token\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.289308 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.289366 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.289491 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-trusted-ca\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.289553 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.289595 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-registry-certificates\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.289647 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-registry-tls\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.339349 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.362913 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsvjd"] Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.390953 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlp67\" (UniqueName: \"kubernetes.io/projected/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-kube-api-access-xlp67\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.391086 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-bound-sa-token\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.391127 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.391174 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c78tp\" (UID: 
\"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.391209 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-trusted-ca\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.391262 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-registry-certificates\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.391317 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-registry-tls\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.393004 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-trusted-ca\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.393332 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-ca-trust-extracted\") pod 
\"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.393538 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-registry-certificates\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.400613 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.403032 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-registry-tls\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.413077 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-bound-sa-token\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.413510 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlp67\" (UniqueName: 
\"kubernetes.io/projected/3eccf2e1-8ed2-4c86-aace-8bf67778fbbb-kube-api-access-xlp67\") pod \"image-registry-66df7c8f76-c78tp\" (UID: \"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb\") " pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.434336 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsvjd" event={"ID":"19c688b4-e818-423b-8a30-79fc47b8671d","Type":"ContainerStarted","Data":"286d6078977bf79ea707d9cad390a036e6c4b66b799d8fc1f2495393d1bb9698"} Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.495213 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.616272 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94ccv"] Feb 19 00:12:18 crc kubenswrapper[4889]: W0219 00:12:18.640800 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a80ba53_e06d_4dfc_b26c_6f251cb67d26.slice/crio-a7095fc15116bbebe0272bf0c4feff13aaf7b9390823ae6771b622b25f614933 WatchSource:0}: Error finding container a7095fc15116bbebe0272bf0c4feff13aaf7b9390823ae6771b622b25f614933: Status 404 returned error can't find the container with id a7095fc15116bbebe0272bf0c4feff13aaf7b9390823ae6771b622b25f614933 Feb 19 00:12:18 crc kubenswrapper[4889]: E0219 00:12:18.681004 4889 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19c688b4_e818_423b_8a30_79fc47b8671d.slice/crio-conmon-bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0.scope\": RecentStats: unable to find data in memory cache]" Feb 19 00:12:18 crc kubenswrapper[4889]: I0219 00:12:18.970934 4889 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c78tp"] Feb 19 00:12:18 crc kubenswrapper[4889]: W0219 00:12:18.984586 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eccf2e1_8ed2_4c86_aace_8bf67778fbbb.slice/crio-8ab83b33c0bfeccf37e4c39416e247a759294fffc20e044b425106c398d7a875 WatchSource:0}: Error finding container 8ab83b33c0bfeccf37e4c39416e247a759294fffc20e044b425106c398d7a875: Status 404 returned error can't find the container with id 8ab83b33c0bfeccf37e4c39416e247a759294fffc20e044b425106c398d7a875 Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.443881 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" event={"ID":"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb","Type":"ContainerStarted","Data":"8ab83b33c0bfeccf37e4c39416e247a759294fffc20e044b425106c398d7a875"} Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.447931 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a80ba53-e06d-4dfc-b26c-6f251cb67d26" containerID="ad69bd216b97427e856c54c912a7710730a2b85b51a0a7df49056ab551e77d32" exitCode=0 Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.448021 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ccv" event={"ID":"6a80ba53-e06d-4dfc-b26c-6f251cb67d26","Type":"ContainerDied","Data":"ad69bd216b97427e856c54c912a7710730a2b85b51a0a7df49056ab551e77d32"} Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.448065 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ccv" event={"ID":"6a80ba53-e06d-4dfc-b26c-6f251cb67d26","Type":"ContainerStarted","Data":"a7095fc15116bbebe0272bf0c4feff13aaf7b9390823ae6771b622b25f614933"} Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.452559 4889 generic.go:334] "Generic (PLEG): container finished" 
podID="19c688b4-e818-423b-8a30-79fc47b8671d" containerID="bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0" exitCode=0 Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.452611 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsvjd" event={"ID":"19c688b4-e818-423b-8a30-79fc47b8671d","Type":"ContainerDied","Data":"bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0"} Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.906589 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lqbdl"] Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.908107 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqbdl" Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.910729 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 00:12:19 crc kubenswrapper[4889]: I0219 00:12:19.920640 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqbdl"] Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.108068 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fb63a2-33a5-49fa-a5c6-854b8d5d3151-catalog-content\") pod \"community-operators-lqbdl\" (UID: \"15fb63a2-33a5-49fa-a5c6-854b8d5d3151\") " pod="openshift-marketplace/community-operators-lqbdl" Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.108789 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj7hq\" (UniqueName: \"kubernetes.io/projected/15fb63a2-33a5-49fa-a5c6-854b8d5d3151-kube-api-access-kj7hq\") pod \"community-operators-lqbdl\" (UID: \"15fb63a2-33a5-49fa-a5c6-854b8d5d3151\") " 
pod="openshift-marketplace/community-operators-lqbdl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.108833 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fb63a2-33a5-49fa-a5c6-854b8d5d3151-utilities\") pod \"community-operators-lqbdl\" (UID: \"15fb63a2-33a5-49fa-a5c6-854b8d5d3151\") " pod="openshift-marketplace/community-operators-lqbdl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.111162 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62kjl"]
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.112778 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.117809 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.126932 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62kjl"]
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.210434 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj7hq\" (UniqueName: \"kubernetes.io/projected/15fb63a2-33a5-49fa-a5c6-854b8d5d3151-kube-api-access-kj7hq\") pod \"community-operators-lqbdl\" (UID: \"15fb63a2-33a5-49fa-a5c6-854b8d5d3151\") " pod="openshift-marketplace/community-operators-lqbdl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.210542 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fb63a2-33a5-49fa-a5c6-854b8d5d3151-utilities\") pod \"community-operators-lqbdl\" (UID: \"15fb63a2-33a5-49fa-a5c6-854b8d5d3151\") " pod="openshift-marketplace/community-operators-lqbdl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.210601 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6cjj\" (UniqueName: \"kubernetes.io/projected/4f5c2904-119c-4f2a-b0f1-1efdea92c06a-kube-api-access-z6cjj\") pod \"certified-operators-62kjl\" (UID: \"4f5c2904-119c-4f2a-b0f1-1efdea92c06a\") " pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.210635 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f5c2904-119c-4f2a-b0f1-1efdea92c06a-catalog-content\") pod \"certified-operators-62kjl\" (UID: \"4f5c2904-119c-4f2a-b0f1-1efdea92c06a\") " pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.210728 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fb63a2-33a5-49fa-a5c6-854b8d5d3151-catalog-content\") pod \"community-operators-lqbdl\" (UID: \"15fb63a2-33a5-49fa-a5c6-854b8d5d3151\") " pod="openshift-marketplace/community-operators-lqbdl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.210767 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f5c2904-119c-4f2a-b0f1-1efdea92c06a-utilities\") pod \"certified-operators-62kjl\" (UID: \"4f5c2904-119c-4f2a-b0f1-1efdea92c06a\") " pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.211618 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fb63a2-33a5-49fa-a5c6-854b8d5d3151-utilities\") pod \"community-operators-lqbdl\" (UID: \"15fb63a2-33a5-49fa-a5c6-854b8d5d3151\") " pod="openshift-marketplace/community-operators-lqbdl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.211701 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fb63a2-33a5-49fa-a5c6-854b8d5d3151-catalog-content\") pod \"community-operators-lqbdl\" (UID: \"15fb63a2-33a5-49fa-a5c6-854b8d5d3151\") " pod="openshift-marketplace/community-operators-lqbdl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.234792 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj7hq\" (UniqueName: \"kubernetes.io/projected/15fb63a2-33a5-49fa-a5c6-854b8d5d3151-kube-api-access-kj7hq\") pod \"community-operators-lqbdl\" (UID: \"15fb63a2-33a5-49fa-a5c6-854b8d5d3151\") " pod="openshift-marketplace/community-operators-lqbdl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.284181 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lqbdl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.311471 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6cjj\" (UniqueName: \"kubernetes.io/projected/4f5c2904-119c-4f2a-b0f1-1efdea92c06a-kube-api-access-z6cjj\") pod \"certified-operators-62kjl\" (UID: \"4f5c2904-119c-4f2a-b0f1-1efdea92c06a\") " pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.311551 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f5c2904-119c-4f2a-b0f1-1efdea92c06a-catalog-content\") pod \"certified-operators-62kjl\" (UID: \"4f5c2904-119c-4f2a-b0f1-1efdea92c06a\") " pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.311601 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f5c2904-119c-4f2a-b0f1-1efdea92c06a-utilities\") pod \"certified-operators-62kjl\" (UID: \"4f5c2904-119c-4f2a-b0f1-1efdea92c06a\") " pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.312201 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f5c2904-119c-4f2a-b0f1-1efdea92c06a-utilities\") pod \"certified-operators-62kjl\" (UID: \"4f5c2904-119c-4f2a-b0f1-1efdea92c06a\") " pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.312909 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f5c2904-119c-4f2a-b0f1-1efdea92c06a-catalog-content\") pod \"certified-operators-62kjl\" (UID: \"4f5c2904-119c-4f2a-b0f1-1efdea92c06a\") " pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.333899 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6cjj\" (UniqueName: \"kubernetes.io/projected/4f5c2904-119c-4f2a-b0f1-1efdea92c06a-kube-api-access-z6cjj\") pod \"certified-operators-62kjl\" (UID: \"4f5c2904-119c-4f2a-b0f1-1efdea92c06a\") " pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.428937 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62kjl"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.472982 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" event={"ID":"3eccf2e1-8ed2-4c86-aace-8bf67778fbbb","Type":"ContainerStarted","Data":"8e051805f5a573466c547e10a4271283e4c06dc29cb7321173ab3c98210db8a3"}
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.474185 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-c78tp"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.497728 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" podStartSLOduration=2.497701308 podStartE2EDuration="2.497701308s" podCreationTimestamp="2026-02-19 00:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:20.497313093 +0000 UTC m=+366.461978084" watchObservedRunningTime="2026-02-19 00:12:20.497701308 +0000 UTC m=+366.462366299"
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.721044 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lqbdl"]
Feb 19 00:12:20 crc kubenswrapper[4889]: W0219 00:12:20.733577 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15fb63a2_33a5_49fa_a5c6_854b8d5d3151.slice/crio-021e2d2cc1f3045c8023cd37d2a1cb7af1adde45f7fee9dce80153b6aac8a4e1 WatchSource:0}: Error finding container 021e2d2cc1f3045c8023cd37d2a1cb7af1adde45f7fee9dce80153b6aac8a4e1: Status 404 returned error can't find the container with id 021e2d2cc1f3045c8023cd37d2a1cb7af1adde45f7fee9dce80153b6aac8a4e1
Feb 19 00:12:20 crc kubenswrapper[4889]: I0219 00:12:20.937889 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62kjl"]
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.483100 4889 generic.go:334] "Generic (PLEG): container finished" podID="15fb63a2-33a5-49fa-a5c6-854b8d5d3151" containerID="27b3fe52d3f4392688408db61337ad2173bdbffc66fe5b59a42ddd6ffadbeb0d" exitCode=0
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.483205 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqbdl" event={"ID":"15fb63a2-33a5-49fa-a5c6-854b8d5d3151","Type":"ContainerDied","Data":"27b3fe52d3f4392688408db61337ad2173bdbffc66fe5b59a42ddd6ffadbeb0d"}
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.483264 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqbdl" event={"ID":"15fb63a2-33a5-49fa-a5c6-854b8d5d3151","Type":"ContainerStarted","Data":"021e2d2cc1f3045c8023cd37d2a1cb7af1adde45f7fee9dce80153b6aac8a4e1"}
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.487480 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a80ba53-e06d-4dfc-b26c-6f251cb67d26" containerID="2dfcfcc7b4ef6f97d9c1912d24fac9cf98c815d65a0cb8ec711eb92cc5fb7c98" exitCode=0
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.487548 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ccv" event={"ID":"6a80ba53-e06d-4dfc-b26c-6f251cb67d26","Type":"ContainerDied","Data":"2dfcfcc7b4ef6f97d9c1912d24fac9cf98c815d65a0cb8ec711eb92cc5fb7c98"}
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.494808 4889 generic.go:334] "Generic (PLEG): container finished" podID="19c688b4-e818-423b-8a30-79fc47b8671d" containerID="bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005" exitCode=0
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.494987 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsvjd" event={"ID":"19c688b4-e818-423b-8a30-79fc47b8671d","Type":"ContainerDied","Data":"bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005"}
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.499485 4889 generic.go:334] "Generic (PLEG): container finished" podID="4f5c2904-119c-4f2a-b0f1-1efdea92c06a" containerID="64eab04412aaf6198def2aa07f56c94130ce91b799dde9e2ee518eceb0d38e9d" exitCode=0
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.499551 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kjl" event={"ID":"4f5c2904-119c-4f2a-b0f1-1efdea92c06a","Type":"ContainerDied","Data":"64eab04412aaf6198def2aa07f56c94130ce91b799dde9e2ee518eceb0d38e9d"}
Feb 19 00:12:21 crc kubenswrapper[4889]: I0219 00:12:21.499592 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kjl" event={"ID":"4f5c2904-119c-4f2a-b0f1-1efdea92c06a","Type":"ContainerStarted","Data":"8f97459d036c9a4a4a3480d7a2a8f1e45c90a2672d3158d4a3abed4d914a16f8"}
Feb 19 00:12:22 crc kubenswrapper[4889]: I0219 00:12:22.509360 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqbdl" event={"ID":"15fb63a2-33a5-49fa-a5c6-854b8d5d3151","Type":"ContainerStarted","Data":"d696693d68601bb22f927b86508859c4e8547b04e1799c28a36391c8ef1068de"}
Feb 19 00:12:22 crc kubenswrapper[4889]: I0219 00:12:22.512481 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94ccv" event={"ID":"6a80ba53-e06d-4dfc-b26c-6f251cb67d26","Type":"ContainerStarted","Data":"cdd8055f723dc478fe087b85546ad9b014a92efd1149dab0415bcc868331cd84"}
Feb 19 00:12:22 crc kubenswrapper[4889]: I0219 00:12:22.515179 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsvjd" event={"ID":"19c688b4-e818-423b-8a30-79fc47b8671d","Type":"ContainerStarted","Data":"dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783"}
Feb 19 00:12:22 crc kubenswrapper[4889]: I0219 00:12:22.517071 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kjl" event={"ID":"4f5c2904-119c-4f2a-b0f1-1efdea92c06a","Type":"ContainerStarted","Data":"671000006a2fbc4687ba6f01540516c06667cbe02e3422fd8c185afba9f1d6ce"}
Feb 19 00:12:22 crc kubenswrapper[4889]: I0219 00:12:22.559745 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dsvjd" podStartSLOduration=3.113506392 podStartE2EDuration="5.559727914s" podCreationTimestamp="2026-02-19 00:12:17 +0000 UTC" firstStartedPulling="2026-02-19 00:12:19.455259755 +0000 UTC m=+365.419924746" lastFinishedPulling="2026-02-19 00:12:21.901481277 +0000 UTC m=+367.866146268" observedRunningTime="2026-02-19 00:12:22.557727688 +0000 UTC m=+368.522392689" watchObservedRunningTime="2026-02-19 00:12:22.559727914 +0000 UTC m=+368.524392905"
Feb 19 00:12:22 crc kubenswrapper[4889]: I0219 00:12:22.601580 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94ccv" podStartSLOduration=3.104356984 podStartE2EDuration="5.601563063s" podCreationTimestamp="2026-02-19 00:12:17 +0000 UTC" firstStartedPulling="2026-02-19 00:12:19.449885741 +0000 UTC m=+365.414550732" lastFinishedPulling="2026-02-19 00:12:21.94709182 +0000 UTC m=+367.911756811" observedRunningTime="2026-02-19 00:12:22.579267566 +0000 UTC m=+368.543932607" watchObservedRunningTime="2026-02-19 00:12:22.601563063 +0000 UTC m=+368.566228054"
Feb 19 00:12:23 crc kubenswrapper[4889]: I0219 00:12:23.527038 4889 generic.go:334] "Generic (PLEG): container finished" podID="4f5c2904-119c-4f2a-b0f1-1efdea92c06a" containerID="671000006a2fbc4687ba6f01540516c06667cbe02e3422fd8c185afba9f1d6ce" exitCode=0
Feb 19 00:12:23 crc kubenswrapper[4889]: I0219 00:12:23.527959 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kjl" event={"ID":"4f5c2904-119c-4f2a-b0f1-1efdea92c06a","Type":"ContainerDied","Data":"671000006a2fbc4687ba6f01540516c06667cbe02e3422fd8c185afba9f1d6ce"}
Feb 19 00:12:23 crc kubenswrapper[4889]: I0219 00:12:23.529621 4889 generic.go:334] "Generic (PLEG): container finished" podID="15fb63a2-33a5-49fa-a5c6-854b8d5d3151" containerID="d696693d68601bb22f927b86508859c4e8547b04e1799c28a36391c8ef1068de" exitCode=0
Feb 19 00:12:23 crc kubenswrapper[4889]: I0219 00:12:23.529729 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqbdl" event={"ID":"15fb63a2-33a5-49fa-a5c6-854b8d5d3151","Type":"ContainerDied","Data":"d696693d68601bb22f927b86508859c4e8547b04e1799c28a36391c8ef1068de"}
Feb 19 00:12:24 crc kubenswrapper[4889]: I0219 00:12:24.550256 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62kjl" event={"ID":"4f5c2904-119c-4f2a-b0f1-1efdea92c06a","Type":"ContainerStarted","Data":"b25fbef7ed027159d559f3c6dad7bc1d94909b79acbf44c694f7d939076cd34e"}
Feb 19 00:12:24 crc kubenswrapper[4889]: I0219 00:12:24.555266 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lqbdl" event={"ID":"15fb63a2-33a5-49fa-a5c6-854b8d5d3151","Type":"ContainerStarted","Data":"aa435636469c9f0a4583ffa5efbd4c9303a32dea97022dafef00a59c92e3e470"}
Feb 19 00:12:24 crc kubenswrapper[4889]: I0219 00:12:24.586841 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62kjl" podStartSLOduration=2.151863879 podStartE2EDuration="4.586809743s" podCreationTimestamp="2026-02-19 00:12:20 +0000 UTC" firstStartedPulling="2026-02-19 00:12:21.500987222 +0000 UTC m=+367.465652213" lastFinishedPulling="2026-02-19 00:12:23.935933086 +0000 UTC m=+369.900598077" observedRunningTime="2026-02-19 00:12:24.584922242 +0000 UTC m=+370.549587233" watchObservedRunningTime="2026-02-19 00:12:24.586809743 +0000 UTC m=+370.551474734"
Feb 19 00:12:24 crc kubenswrapper[4889]: I0219 00:12:24.615407 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lqbdl" podStartSLOduration=3.094214079 podStartE2EDuration="5.615383699s" podCreationTimestamp="2026-02-19 00:12:19 +0000 UTC" firstStartedPulling="2026-02-19 00:12:21.485380519 +0000 UTC m=+367.450045510" lastFinishedPulling="2026-02-19 00:12:24.006550139 +0000 UTC m=+369.971215130" observedRunningTime="2026-02-19 00:12:24.61250942 +0000 UTC m=+370.577174441" watchObservedRunningTime="2026-02-19 00:12:24.615383699 +0000 UTC m=+370.580048690"
Feb 19 00:12:27 crc kubenswrapper[4889]: I0219 00:12:27.847361 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dsvjd"
Feb 19 00:12:27 crc kubenswrapper[4889]: I0219 00:12:27.848394 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dsvjd"
Feb 19 00:12:27 crc kubenswrapper[4889]: I0219 00:12:27.894461 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-9kpd9"]
Feb 19 00:12:27 crc kubenswrapper[4889]: I0219 00:12:27.894794 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" podUID="e353cd71-4a24-47b8-8cf7-83fae9f6df96" containerName="controller-manager" containerID="cri-o://7d994deaee0067ac1dc457d35178d5a5d9fc5ec30bfc0d857869a5def766baaf" gracePeriod=30
Feb 19 00:12:27 crc kubenswrapper[4889]: I0219 00:12:27.917323 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq"]
Feb 19 00:12:27 crc kubenswrapper[4889]: I0219 00:12:27.917635 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" podUID="25845a04-fe5d-46cf-b166-aa983f8e6e4e" containerName="route-controller-manager" containerID="cri-o://37da213e39180a7a115670b6a20385c54e411360acd4aadcb3f6b161eb497e43" gracePeriod=30
Feb 19 00:12:27 crc kubenswrapper[4889]: I0219 00:12:27.937006 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dsvjd"
Feb 19 00:12:28 crc kubenswrapper[4889]: I0219 00:12:28.026444 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94ccv"
Feb 19 00:12:28 crc kubenswrapper[4889]: I0219 00:12:28.026521 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-94ccv"
Feb 19 00:12:28 crc kubenswrapper[4889]: I0219 00:12:28.070946 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94ccv"
Feb 19 00:12:28 crc kubenswrapper[4889]: I0219 00:12:28.582728 4889 generic.go:334] "Generic (PLEG): container finished" podID="25845a04-fe5d-46cf-b166-aa983f8e6e4e" containerID="37da213e39180a7a115670b6a20385c54e411360acd4aadcb3f6b161eb497e43" exitCode=0
Feb 19 00:12:28 crc kubenswrapper[4889]: I0219 00:12:28.582826 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" event={"ID":"25845a04-fe5d-46cf-b166-aa983f8e6e4e","Type":"ContainerDied","Data":"37da213e39180a7a115670b6a20385c54e411360acd4aadcb3f6b161eb497e43"}
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:28.585289 4889 generic.go:334] "Generic (PLEG): container finished" podID="e353cd71-4a24-47b8-8cf7-83fae9f6df96" containerID="7d994deaee0067ac1dc457d35178d5a5d9fc5ec30bfc0d857869a5def766baaf" exitCode=0
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:28.585436 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" event={"ID":"e353cd71-4a24-47b8-8cf7-83fae9f6df96","Type":"ContainerDied","Data":"7d994deaee0067ac1dc457d35178d5a5d9fc5ec30bfc0d857869a5def766baaf"}
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:28.629377 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dsvjd"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:28.631339 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-94ccv"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.110198 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.199057 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"]
Feb 19 00:12:29 crc kubenswrapper[4889]: E0219 00:12:29.199543 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25845a04-fe5d-46cf-b166-aa983f8e6e4e" containerName="route-controller-manager"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.199561 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="25845a04-fe5d-46cf-b166-aa983f8e6e4e" containerName="route-controller-manager"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.199703 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="25845a04-fe5d-46cf-b166-aa983f8e6e4e" containerName="route-controller-manager"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.200303 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.216219 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"]
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.272858 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmsch\" (UniqueName: \"kubernetes.io/projected/25845a04-fe5d-46cf-b166-aa983f8e6e4e-kube-api-access-fmsch\") pod \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") "
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.273095 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-client-ca\") pod \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") "
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.273131 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-config\") pod \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") "
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.273161 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25845a04-fe5d-46cf-b166-aa983f8e6e4e-serving-cert\") pod \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\" (UID: \"25845a04-fe5d-46cf-b166-aa983f8e6e4e\") "
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.274335 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-config" (OuterVolumeSpecName: "config") pod "25845a04-fe5d-46cf-b166-aa983f8e6e4e" (UID: "25845a04-fe5d-46cf-b166-aa983f8e6e4e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.274359 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-client-ca" (OuterVolumeSpecName: "client-ca") pod "25845a04-fe5d-46cf-b166-aa983f8e6e4e" (UID: "25845a04-fe5d-46cf-b166-aa983f8e6e4e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.275018 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.275035 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25845a04-fe5d-46cf-b166-aa983f8e6e4e-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.281304 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25845a04-fe5d-46cf-b166-aa983f8e6e4e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "25845a04-fe5d-46cf-b166-aa983f8e6e4e" (UID: "25845a04-fe5d-46cf-b166-aa983f8e6e4e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.283616 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25845a04-fe5d-46cf-b166-aa983f8e6e4e-kube-api-access-fmsch" (OuterVolumeSpecName: "kube-api-access-fmsch") pod "25845a04-fe5d-46cf-b166-aa983f8e6e4e" (UID: "25845a04-fe5d-46cf-b166-aa983f8e6e4e"). InnerVolumeSpecName "kube-api-access-fmsch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.376962 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlvs\" (UniqueName: \"kubernetes.io/projected/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-kube-api-access-twlvs\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.377548 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-client-ca\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.377576 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-serving-cert\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.377605 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-config\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.377656 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25845a04-fe5d-46cf-b166-aa983f8e6e4e-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.377778 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmsch\" (UniqueName: \"kubernetes.io/projected/25845a04-fe5d-46cf-b166-aa983f8e6e4e-kube-api-access-fmsch\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.479036 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-client-ca\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.479119 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-serving-cert\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.479158 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-config\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.479213 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlvs\" (UniqueName: \"kubernetes.io/projected/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-kube-api-access-twlvs\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.480320 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-client-ca\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.480697 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-config\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.498652 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-serving-cert\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.502873 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlvs\" (UniqueName: \"kubernetes.io/projected/b0f9d13f-3439-433b-8c36-88ff1d2a3c59-kube-api-access-twlvs\") pod \"route-controller-manager-7f5776b745-2pxg2\" (UID: \"b0f9d13f-3439-433b-8c36-88ff1d2a3c59\") " pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.525059 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.602614 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq" event={"ID":"25845a04-fe5d-46cf-b166-aa983f8e6e4e","Type":"ContainerDied","Data":"2eee7c3b6df73f2dbebcb36f6c8ee2ccdbe87de97eb8abbf97c03275851a2497"}
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.602647 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.602701 4889 scope.go:117] "RemoveContainer" containerID="37da213e39180a7a115670b6a20385c54e411360acd4aadcb3f6b161eb497e43"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.659293 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq"]
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.665266 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9"
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.668245 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9d8c44798-s5fcq"]
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.793592 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-config\") pod \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") "
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.794160 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e353cd71-4a24-47b8-8cf7-83fae9f6df96-serving-cert\") pod \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") "
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.794184 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-proxy-ca-bundles\") pod \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") "
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.794255 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8n96\" (UniqueName: \"kubernetes.io/projected/e353cd71-4a24-47b8-8cf7-83fae9f6df96-kube-api-access-z8n96\") pod \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") "
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.794297 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-client-ca\") pod \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\" (UID: \"e353cd71-4a24-47b8-8cf7-83fae9f6df96\") "
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.795119 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-config" (OuterVolumeSpecName: "config") pod "e353cd71-4a24-47b8-8cf7-83fae9f6df96" (UID: "e353cd71-4a24-47b8-8cf7-83fae9f6df96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.795538 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-client-ca" (OuterVolumeSpecName: "client-ca") pod "e353cd71-4a24-47b8-8cf7-83fae9f6df96" (UID: "e353cd71-4a24-47b8-8cf7-83fae9f6df96"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.795849 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e353cd71-4a24-47b8-8cf7-83fae9f6df96" (UID: "e353cd71-4a24-47b8-8cf7-83fae9f6df96"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.799330 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e353cd71-4a24-47b8-8cf7-83fae9f6df96-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e353cd71-4a24-47b8-8cf7-83fae9f6df96" (UID: "e353cd71-4a24-47b8-8cf7-83fae9f6df96"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.799361 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e353cd71-4a24-47b8-8cf7-83fae9f6df96-kube-api-access-z8n96" (OuterVolumeSpecName: "kube-api-access-z8n96") pod "e353cd71-4a24-47b8-8cf7-83fae9f6df96" (UID: "e353cd71-4a24-47b8-8cf7-83fae9f6df96"). InnerVolumeSpecName "kube-api-access-z8n96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.896792 4889 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.896842 4889 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e353cd71-4a24-47b8-8cf7-83fae9f6df96-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.896858 4889 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.896876 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8n96\" (UniqueName: \"kubernetes.io/projected/e353cd71-4a24-47b8-8cf7-83fae9f6df96-kube-api-access-z8n96\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:29 crc kubenswrapper[4889]: I0219 00:12:29.896888 4889 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e353cd71-4a24-47b8-8cf7-83fae9f6df96-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.106109 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2"] Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.284906 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lqbdl" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.284972 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lqbdl" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.347511 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lqbdl" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.430394 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62kjl" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.430835 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62kjl" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.479348 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62kjl" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.614394 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2" event={"ID":"b0f9d13f-3439-433b-8c36-88ff1d2a3c59","Type":"ContainerStarted","Data":"4361f384d7538d93640b3788725c9ac3b3ff74594e59b15c4df8b222631afc12"} Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.614468 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2" event={"ID":"b0f9d13f-3439-433b-8c36-88ff1d2a3c59","Type":"ContainerStarted","Data":"ef5660b4c56767c792d81e59fa7ab04bb1c0f39748149c4c60e357d132780c66"} Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.618232 4889 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" event={"ID":"e353cd71-4a24-47b8-8cf7-83fae9f6df96","Type":"ContainerDied","Data":"26a54f1c011cc42daa3d2d68554f07f5e40c1566a93624f4ec63bc32b464835a"} Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.618321 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75b4f6d956-9kpd9" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.618706 4889 scope.go:117] "RemoveContainer" containerID="7d994deaee0067ac1dc457d35178d5a5d9fc5ec30bfc0d857869a5def766baaf" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.649612 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2" podStartSLOduration=2.649587469 podStartE2EDuration="2.649587469s" podCreationTimestamp="2026-02-19 00:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:30.648336852 +0000 UTC m=+376.613001853" watchObservedRunningTime="2026-02-19 00:12:30.649587469 +0000 UTC m=+376.614252460" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.665034 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lqbdl" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.676340 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-9kpd9"] Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.680054 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75b4f6d956-9kpd9"] Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.694960 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-62kjl" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.744270 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25845a04-fe5d-46cf-b166-aa983f8e6e4e" path="/var/lib/kubelet/pods/25845a04-fe5d-46cf-b166-aa983f8e6e4e/volumes" Feb 19 00:12:30 crc kubenswrapper[4889]: I0219 00:12:30.744851 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e353cd71-4a24-47b8-8cf7-83fae9f6df96" path="/var/lib/kubelet/pods/e353cd71-4a24-47b8-8cf7-83fae9f6df96/volumes" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.628202 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.634797 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f5776b745-2pxg2" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.961010 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56f548c9c8-gh6dl"] Feb 19 00:12:31 crc kubenswrapper[4889]: E0219 00:12:31.961348 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e353cd71-4a24-47b8-8cf7-83fae9f6df96" containerName="controller-manager" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.961372 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="e353cd71-4a24-47b8-8cf7-83fae9f6df96" containerName="controller-manager" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.961505 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="e353cd71-4a24-47b8-8cf7-83fae9f6df96" containerName="controller-manager" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.962046 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.966249 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.966323 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.966757 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.966900 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.966939 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.969333 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.982805 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 00:12:31 crc kubenswrapper[4889]: I0219 00:12:31.987836 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f548c9c8-gh6dl"] Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.029602 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b542q\" (UniqueName: \"kubernetes.io/projected/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-kube-api-access-b542q\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " 
pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.029663 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-config\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.029695 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-serving-cert\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.029721 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-proxy-ca-bundles\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.029761 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-client-ca\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.131065 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b542q\" (UniqueName: 
\"kubernetes.io/projected/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-kube-api-access-b542q\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.131132 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-config\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.131158 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-serving-cert\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.131181 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-proxy-ca-bundles\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.131286 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-client-ca\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.132885 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-client-ca\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.132955 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-proxy-ca-bundles\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.134671 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-config\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.141468 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-serving-cert\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.148579 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b542q\" (UniqueName: \"kubernetes.io/projected/8124c833-d5ab-4df1-9e6f-04e176b0eb1d-kube-api-access-b542q\") pod \"controller-manager-56f548c9c8-gh6dl\" (UID: \"8124c833-d5ab-4df1-9e6f-04e176b0eb1d\") " pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 
00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.294035 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:32 crc kubenswrapper[4889]: I0219 00:12:32.625483 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56f548c9c8-gh6dl"] Feb 19 00:12:33 crc kubenswrapper[4889]: I0219 00:12:33.641280 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" event={"ID":"8124c833-d5ab-4df1-9e6f-04e176b0eb1d","Type":"ContainerStarted","Data":"1d50060e1ee0b4f0a3e25782ae66109827f420f8cf49a7e22634aca8ffd7f90e"} Feb 19 00:12:33 crc kubenswrapper[4889]: I0219 00:12:33.641842 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" event={"ID":"8124c833-d5ab-4df1-9e6f-04e176b0eb1d","Type":"ContainerStarted","Data":"ac8ef2aa3982329580ad2dc94a4b40fe0eb6c18afa0b24f3834881dc864e8071"} Feb 19 00:12:33 crc kubenswrapper[4889]: I0219 00:12:33.663082 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" podStartSLOduration=6.663056321 podStartE2EDuration="6.663056321s" podCreationTimestamp="2026-02-19 00:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:33.658470368 +0000 UTC m=+379.623135359" watchObservedRunningTime="2026-02-19 00:12:33.663056321 +0000 UTC m=+379.627721312" Feb 19 00:12:34 crc kubenswrapper[4889]: I0219 00:12:34.647476 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:34 crc kubenswrapper[4889]: I0219 00:12:34.656051 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-controller-manager/controller-manager-56f548c9c8-gh6dl" Feb 19 00:12:37 crc kubenswrapper[4889]: I0219 00:12:37.781814 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:12:37 crc kubenswrapper[4889]: I0219 00:12:37.782387 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:12:38 crc kubenswrapper[4889]: I0219 00:12:38.502337 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-c78tp" Feb 19 00:12:38 crc kubenswrapper[4889]: I0219 00:12:38.569269 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9b8dc"] Feb 19 00:13:03 crc kubenswrapper[4889]: I0219 00:13:03.611614 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" podUID="585fe6a3-bec2-42fb-bc1c-75203481f19a" containerName="registry" containerID="cri-o://278a736491d8ac064dce76afd169a4e0d60c517d01c3d55cc07475b657525d49" gracePeriod=30 Feb 19 00:13:03 crc kubenswrapper[4889]: I0219 00:13:03.858638 4889 generic.go:334] "Generic (PLEG): container finished" podID="585fe6a3-bec2-42fb-bc1c-75203481f19a" containerID="278a736491d8ac064dce76afd169a4e0d60c517d01c3d55cc07475b657525d49" exitCode=0 Feb 19 00:13:03 crc kubenswrapper[4889]: I0219 00:13:03.858699 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" event={"ID":"585fe6a3-bec2-42fb-bc1c-75203481f19a","Type":"ContainerDied","Data":"278a736491d8ac064dce76afd169a4e0d60c517d01c3d55cc07475b657525d49"} Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.132248 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.238442 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-tls\") pod \"585fe6a3-bec2-42fb-bc1c-75203481f19a\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.238591 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-certificates\") pod \"585fe6a3-bec2-42fb-bc1c-75203481f19a\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.238665 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-trusted-ca\") pod \"585fe6a3-bec2-42fb-bc1c-75203481f19a\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.238729 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/585fe6a3-bec2-42fb-bc1c-75203481f19a-installation-pull-secrets\") pod \"585fe6a3-bec2-42fb-bc1c-75203481f19a\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.238755 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-bound-sa-token\") pod \"585fe6a3-bec2-42fb-bc1c-75203481f19a\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.238794 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkdsn\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-kube-api-access-bkdsn\") pod \"585fe6a3-bec2-42fb-bc1c-75203481f19a\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.239102 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"585fe6a3-bec2-42fb-bc1c-75203481f19a\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.239156 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/585fe6a3-bec2-42fb-bc1c-75203481f19a-ca-trust-extracted\") pod \"585fe6a3-bec2-42fb-bc1c-75203481f19a\" (UID: \"585fe6a3-bec2-42fb-bc1c-75203481f19a\") " Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.239840 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "585fe6a3-bec2-42fb-bc1c-75203481f19a" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.240839 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "585fe6a3-bec2-42fb-bc1c-75203481f19a" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.260965 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585fe6a3-bec2-42fb-bc1c-75203481f19a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "585fe6a3-bec2-42fb-bc1c-75203481f19a" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.264697 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-kube-api-access-bkdsn" (OuterVolumeSpecName: "kube-api-access-bkdsn") pod "585fe6a3-bec2-42fb-bc1c-75203481f19a" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a"). InnerVolumeSpecName "kube-api-access-bkdsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.265040 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "585fe6a3-bec2-42fb-bc1c-75203481f19a" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.265746 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/585fe6a3-bec2-42fb-bc1c-75203481f19a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "585fe6a3-bec2-42fb-bc1c-75203481f19a" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.266135 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "585fe6a3-bec2-42fb-bc1c-75203481f19a" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.266656 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "585fe6a3-bec2-42fb-bc1c-75203481f19a" (UID: "585fe6a3-bec2-42fb-bc1c-75203481f19a"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.341534 4889 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/585fe6a3-bec2-42fb-bc1c-75203481f19a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.341597 4889 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.341608 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkdsn\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-kube-api-access-bkdsn\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.341617 4889 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/585fe6a3-bec2-42fb-bc1c-75203481f19a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.341628 4889 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.341637 4889 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.341650 4889 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/585fe6a3-bec2-42fb-bc1c-75203481f19a-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:04 crc 
kubenswrapper[4889]: I0219 00:13:04.868887 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" event={"ID":"585fe6a3-bec2-42fb-bc1c-75203481f19a","Type":"ContainerDied","Data":"de6ce268f50ee5382839c8c1b5a2e09fb4b8b741498f7c9a0ce191c3f4e03c85"} Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.868976 4889 scope.go:117] "RemoveContainer" containerID="278a736491d8ac064dce76afd169a4e0d60c517d01c3d55cc07475b657525d49" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.869017 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9b8dc" Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.891683 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9b8dc"] Feb 19 00:13:04 crc kubenswrapper[4889]: I0219 00:13:04.896568 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9b8dc"] Feb 19 00:13:06 crc kubenswrapper[4889]: I0219 00:13:06.731872 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585fe6a3-bec2-42fb-bc1c-75203481f19a" path="/var/lib/kubelet/pods/585fe6a3-bec2-42fb-bc1c-75203481f19a/volumes" Feb 19 00:13:07 crc kubenswrapper[4889]: I0219 00:13:07.781827 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:13:07 crc kubenswrapper[4889]: I0219 00:13:07.782408 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:13:07 crc kubenswrapper[4889]: I0219 00:13:07.782484 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:13:07 crc kubenswrapper[4889]: I0219 00:13:07.783437 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2231eeacede3ecd2782301c9a109e45033b30fb6e34ee69f5b69e52a119b5056"} pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:13:07 crc kubenswrapper[4889]: I0219 00:13:07.783515 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" containerID="cri-o://2231eeacede3ecd2782301c9a109e45033b30fb6e34ee69f5b69e52a119b5056" gracePeriod=600 Feb 19 00:13:08 crc kubenswrapper[4889]: I0219 00:13:08.902688 4889 generic.go:334] "Generic (PLEG): container finished" podID="900d194e-937f-4a59-abba-21ed9f94f24f" containerID="2231eeacede3ecd2782301c9a109e45033b30fb6e34ee69f5b69e52a119b5056" exitCode=0 Feb 19 00:13:08 crc kubenswrapper[4889]: I0219 00:13:08.902756 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerDied","Data":"2231eeacede3ecd2782301c9a109e45033b30fb6e34ee69f5b69e52a119b5056"} Feb 19 00:13:08 crc kubenswrapper[4889]: I0219 00:13:08.903133 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" 
event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"5344f45dff8eecfe6b1adf4750c6d3e2ab8740bd4015db965625ea2ee833f1a6"} Feb 19 00:13:08 crc kubenswrapper[4889]: I0219 00:13:08.903164 4889 scope.go:117] "RemoveContainer" containerID="2e39be38eaa324d1ad4f1fe4fc2b40952c59802c4b5db6b3f305d46fa9051f5f" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.177669 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb"] Feb 19 00:15:00 crc kubenswrapper[4889]: E0219 00:15:00.178797 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585fe6a3-bec2-42fb-bc1c-75203481f19a" containerName="registry" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.178818 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="585fe6a3-bec2-42fb-bc1c-75203481f19a" containerName="registry" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.178978 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="585fe6a3-bec2-42fb-bc1c-75203481f19a" containerName="registry" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.179668 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.183775 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.184000 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.197039 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb"] Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.348022 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0384f8a4-abe3-47ed-bc62-ebc2e734b599-config-volume\") pod \"collect-profiles-29524335-zk6hb\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.348146 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0384f8a4-abe3-47ed-bc62-ebc2e734b599-secret-volume\") pod \"collect-profiles-29524335-zk6hb\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.348232 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-594xz\" (UniqueName: \"kubernetes.io/projected/0384f8a4-abe3-47ed-bc62-ebc2e734b599-kube-api-access-594xz\") pod \"collect-profiles-29524335-zk6hb\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.449829 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-594xz\" (UniqueName: \"kubernetes.io/projected/0384f8a4-abe3-47ed-bc62-ebc2e734b599-kube-api-access-594xz\") pod \"collect-profiles-29524335-zk6hb\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.449912 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0384f8a4-abe3-47ed-bc62-ebc2e734b599-config-volume\") pod \"collect-profiles-29524335-zk6hb\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.449964 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0384f8a4-abe3-47ed-bc62-ebc2e734b599-secret-volume\") pod \"collect-profiles-29524335-zk6hb\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.452263 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0384f8a4-abe3-47ed-bc62-ebc2e734b599-config-volume\") pod \"collect-profiles-29524335-zk6hb\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.461226 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0384f8a4-abe3-47ed-bc62-ebc2e734b599-secret-volume\") pod \"collect-profiles-29524335-zk6hb\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.471418 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-594xz\" (UniqueName: \"kubernetes.io/projected/0384f8a4-abe3-47ed-bc62-ebc2e734b599-kube-api-access-594xz\") pod \"collect-profiles-29524335-zk6hb\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.501047 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:00 crc kubenswrapper[4889]: I0219 00:15:00.705089 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb"] Feb 19 00:15:00 crc kubenswrapper[4889]: W0219 00:15:00.713597 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0384f8a4_abe3_47ed_bc62_ebc2e734b599.slice/crio-93ad06b150a405ef2bd62d3dfb5d88843ceb855f5d55a82e902881ee20988698 WatchSource:0}: Error finding container 93ad06b150a405ef2bd62d3dfb5d88843ceb855f5d55a82e902881ee20988698: Status 404 returned error can't find the container with id 93ad06b150a405ef2bd62d3dfb5d88843ceb855f5d55a82e902881ee20988698 Feb 19 00:15:01 crc kubenswrapper[4889]: I0219 00:15:01.578975 4889 generic.go:334] "Generic (PLEG): container finished" podID="0384f8a4-abe3-47ed-bc62-ebc2e734b599" containerID="e098ac74e3aff9250c319ae2f18a9fde2d4ca0014dbb05b34a71eb58c09ed10f" exitCode=0 Feb 19 00:15:01 crc kubenswrapper[4889]: I0219 00:15:01.579063 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" event={"ID":"0384f8a4-abe3-47ed-bc62-ebc2e734b599","Type":"ContainerDied","Data":"e098ac74e3aff9250c319ae2f18a9fde2d4ca0014dbb05b34a71eb58c09ed10f"} Feb 19 00:15:01 crc kubenswrapper[4889]: I0219 00:15:01.579560 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" event={"ID":"0384f8a4-abe3-47ed-bc62-ebc2e734b599","Type":"ContainerStarted","Data":"93ad06b150a405ef2bd62d3dfb5d88843ceb855f5d55a82e902881ee20988698"} Feb 19 00:15:02 crc kubenswrapper[4889]: I0219 00:15:02.804236 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:02 crc kubenswrapper[4889]: I0219 00:15:02.987009 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0384f8a4-abe3-47ed-bc62-ebc2e734b599-secret-volume\") pod \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " Feb 19 00:15:02 crc kubenswrapper[4889]: I0219 00:15:02.988043 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0384f8a4-abe3-47ed-bc62-ebc2e734b599-config-volume\") pod \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " Feb 19 00:15:02 crc kubenswrapper[4889]: I0219 00:15:02.988643 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-594xz\" (UniqueName: \"kubernetes.io/projected/0384f8a4-abe3-47ed-bc62-ebc2e734b599-kube-api-access-594xz\") pod \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\" (UID: \"0384f8a4-abe3-47ed-bc62-ebc2e734b599\") " Feb 19 00:15:02 crc kubenswrapper[4889]: I0219 00:15:02.990058 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0384f8a4-abe3-47ed-bc62-ebc2e734b599-config-volume" (OuterVolumeSpecName: "config-volume") pod "0384f8a4-abe3-47ed-bc62-ebc2e734b599" (UID: "0384f8a4-abe3-47ed-bc62-ebc2e734b599"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:15:02 crc kubenswrapper[4889]: I0219 00:15:02.994283 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0384f8a4-abe3-47ed-bc62-ebc2e734b599-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0384f8a4-abe3-47ed-bc62-ebc2e734b599" (UID: "0384f8a4-abe3-47ed-bc62-ebc2e734b599"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:15:02 crc kubenswrapper[4889]: I0219 00:15:02.994517 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0384f8a4-abe3-47ed-bc62-ebc2e734b599-kube-api-access-594xz" (OuterVolumeSpecName: "kube-api-access-594xz") pod "0384f8a4-abe3-47ed-bc62-ebc2e734b599" (UID: "0384f8a4-abe3-47ed-bc62-ebc2e734b599"). InnerVolumeSpecName "kube-api-access-594xz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:15:03 crc kubenswrapper[4889]: I0219 00:15:03.091257 4889 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0384f8a4-abe3-47ed-bc62-ebc2e734b599-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[4889]: I0219 00:15:03.091313 4889 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0384f8a4-abe3-47ed-bc62-ebc2e734b599-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[4889]: I0219 00:15:03.091326 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-594xz\" (UniqueName: \"kubernetes.io/projected/0384f8a4-abe3-47ed-bc62-ebc2e734b599-kube-api-access-594xz\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[4889]: I0219 00:15:03.594098 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" event={"ID":"0384f8a4-abe3-47ed-bc62-ebc2e734b599","Type":"ContainerDied","Data":"93ad06b150a405ef2bd62d3dfb5d88843ceb855f5d55a82e902881ee20988698"} Feb 19 00:15:03 crc kubenswrapper[4889]: I0219 00:15:03.594571 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93ad06b150a405ef2bd62d3dfb5d88843ceb855f5d55a82e902881ee20988698" Feb 19 00:15:03 crc kubenswrapper[4889]: I0219 00:15:03.594184 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-zk6hb" Feb 19 00:15:37 crc kubenswrapper[4889]: I0219 00:15:37.781873 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:15:37 crc kubenswrapper[4889]: I0219 00:15:37.782863 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:15:49 crc kubenswrapper[4889]: I0219 00:15:49.976160 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4nwjd"] Feb 19 00:15:49 crc kubenswrapper[4889]: I0219 00:15:49.977570 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovn-controller" containerID="cri-o://c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba" gracePeriod=30 Feb 19 00:15:49 crc kubenswrapper[4889]: I0219 00:15:49.977908 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="sbdb" containerID="cri-o://96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b" gracePeriod=30 Feb 19 00:15:49 crc kubenswrapper[4889]: I0219 00:15:49.977973 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="northd" 
containerID="cri-o://bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea" gracePeriod=30 Feb 19 00:15:49 crc kubenswrapper[4889]: I0219 00:15:49.978337 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="nbdb" containerID="cri-o://7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5" gracePeriod=30 Feb 19 00:15:49 crc kubenswrapper[4889]: I0219 00:15:49.978400 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kube-rbac-proxy-node" containerID="cri-o://55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba" gracePeriod=30 Feb 19 00:15:49 crc kubenswrapper[4889]: I0219 00:15:49.978470 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34" gracePeriod=30 Feb 19 00:15:49 crc kubenswrapper[4889]: I0219 00:15:49.978491 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovn-acl-logging" containerID="cri-o://cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae" gracePeriod=30 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.008490 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" containerID="cri-o://a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70" gracePeriod=30 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 
00:15:50.327827 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/3.log" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.331429 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovn-acl-logging/0.log" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.332161 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovn-controller/0.log" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.332847 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.395355 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-g9v74"] Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.396019 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.396125 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.396212 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kubecfg-setup" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.396319 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kubecfg-setup" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.396406 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" 
containerName="ovn-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.396494 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovn-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.396576 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0384f8a4-abe3-47ed-bc62-ebc2e734b599" containerName="collect-profiles" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.396652 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="0384f8a4-abe3-47ed-bc62-ebc2e734b599" containerName="collect-profiles" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.396734 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="nbdb" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.396836 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="nbdb" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.396919 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kube-rbac-proxy-node" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.396998 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kube-rbac-proxy-node" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.397063 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.397122 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.397179 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" 
containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.397267 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.397357 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.397439 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.397520 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="northd" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.397585 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="northd" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.397640 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="sbdb" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.397691 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="sbdb" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.397743 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovn-acl-logging" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.397792 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovn-acl-logging" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.397851 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" 
containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.397901 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398050 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398110 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="sbdb" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398159 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398234 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovn-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398307 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398359 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398412 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="northd" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398464 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="kube-rbac-proxy-node" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398517 4889 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398568 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovn-acl-logging" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398622 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0384f8a4-abe3-47ed-bc62-ebc2e734b599" containerName="collect-profiles" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398674 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="nbdb" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.398841 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.398893 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.399059 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="707d1219-7187-4fda-b155-e6d64687b190" containerName="ovnkube-controller" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.418623 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434036 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-etc-openvswitch\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434111 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-kubelet\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434141 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-env-overrides\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434167 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-log-socket\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434293 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-var-lib-openvswitch\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434322 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-node-log\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434345 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lssf6\" (UniqueName: \"kubernetes.io/projected/707d1219-7187-4fda-b155-e6d64687b190-kube-api-access-lssf6\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434373 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-netns\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434446 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-ovn\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434465 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-openvswitch\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434495 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-ovn-kubernetes\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 
00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434515 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-netd\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434557 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-bin\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434577 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-systemd\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434602 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-var-lib-cni-networks-ovn-kubernetes\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434638 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-script-lib\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434673 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/707d1219-7187-4fda-b155-e6d64687b190-ovn-node-metrics-cert\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434709 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-config\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434725 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-slash\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434741 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-systemd-units\") pod \"707d1219-7187-4fda-b155-e6d64687b190\" (UID: \"707d1219-7187-4fda-b155-e6d64687b190\") " Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434778 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434893 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.434934 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435054 4889 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435084 4889 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435099 4889 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435156 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-ovn" (OuterVolumeSpecName: "run-ovn") pod 
"707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435193 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435239 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435269 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435293 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435358 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435464 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-log-socket" (OuterVolumeSpecName: "log-socket") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435530 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435574 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-node-log" (OuterVolumeSpecName: "node-log") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435751 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-slash" (OuterVolumeSpecName: "host-slash") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435861 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.435886 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.436578 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.436624 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.443822 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/707d1219-7187-4fda-b155-e6d64687b190-kube-api-access-lssf6" (OuterVolumeSpecName: "kube-api-access-lssf6") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "kube-api-access-lssf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.451444 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/707d1219-7187-4fda-b155-e6d64687b190-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.463162 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "707d1219-7187-4fda-b155-e6d64687b190" (UID: "707d1219-7187-4fda-b155-e6d64687b190"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537191 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-cni-bin\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537309 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-run-systemd\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537344 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-node-log\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537380 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-log-socket\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537490 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/131d9ea5-61f8-4485-9da5-31164f72a667-env-overrides\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537540 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-run-ovn\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537573 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-cni-netd\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537684 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-run-netns\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537712 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-var-lib-openvswitch\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537737 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-kubelet\") pod \"ovnkube-node-g9v74\" (UID: 
\"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537765 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-run-openvswitch\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537786 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-etc-openvswitch\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537817 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-run-ovn-kubernetes\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537844 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/131d9ea5-61f8-4485-9da5-31164f72a667-ovnkube-config\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537863 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537887 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-slash\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537905 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-systemd-units\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537923 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmvnt\" (UniqueName: \"kubernetes.io/projected/131d9ea5-61f8-4485-9da5-31164f72a667-kube-api-access-vmvnt\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537946 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/131d9ea5-61f8-4485-9da5-31164f72a667-ovn-node-metrics-cert\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.537969 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/131d9ea5-61f8-4485-9da5-31164f72a667-ovnkube-script-lib\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538028 4889 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538041 4889 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538053 4889 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538063 4889 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538074 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lssf6\" (UniqueName: \"kubernetes.io/projected/707d1219-7187-4fda-b155-e6d64687b190-kube-api-access-lssf6\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538089 4889 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538102 4889 
reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538115 4889 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538127 4889 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538141 4889 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538153 4889 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538165 4889 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538176 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538186 4889 reconciler_common.go:293] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/707d1219-7187-4fda-b155-e6d64687b190-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538345 4889 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/707d1219-7187-4fda-b155-e6d64687b190-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538503 4889 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.538541 4889 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/707d1219-7187-4fda-b155-e6d64687b190-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640302 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-cni-bin\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640361 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-run-systemd\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640392 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-node-log\") pod \"ovnkube-node-g9v74\" (UID: 
\"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640415 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/131d9ea5-61f8-4485-9da5-31164f72a667-env-overrides\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640434 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-log-socket\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640462 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-run-ovn\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640480 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-cni-netd\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640511 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-run-netns\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 
00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640528 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-var-lib-openvswitch\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640542 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-run-systemd\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640598 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-run-netns\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640559 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-kubelet\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640639 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-var-lib-openvswitch\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640646 4889 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-log-socket\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640542 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-node-log\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640606 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-kubelet\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640697 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-run-ovn\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640791 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-etc-openvswitch\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640831 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-etc-openvswitch\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640621 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-cni-netd\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640871 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-run-openvswitch\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640951 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-run-ovn-kubernetes\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.640975 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-run-openvswitch\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641039 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641039 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-run-ovn-kubernetes\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641069 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/131d9ea5-61f8-4485-9da5-31164f72a667-ovnkube-config\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641148 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641173 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-slash\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641209 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-systemd-units\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641256 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmvnt\" (UniqueName: \"kubernetes.io/projected/131d9ea5-61f8-4485-9da5-31164f72a667-kube-api-access-vmvnt\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641273 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-slash\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641287 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/131d9ea5-61f8-4485-9da5-31164f72a667-ovn-node-metrics-cert\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641310 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-systemd-units\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641343 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/131d9ea5-61f8-4485-9da5-31164f72a667-ovnkube-script-lib\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641471 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/131d9ea5-61f8-4485-9da5-31164f72a667-env-overrides\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.641541 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/131d9ea5-61f8-4485-9da5-31164f72a667-host-cni-bin\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.642048 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/131d9ea5-61f8-4485-9da5-31164f72a667-ovnkube-config\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.642321 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/131d9ea5-61f8-4485-9da5-31164f72a667-ovnkube-script-lib\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.645685 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/131d9ea5-61f8-4485-9da5-31164f72a667-ovn-node-metrics-cert\") pod 
\"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.659503 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmvnt\" (UniqueName: \"kubernetes.io/projected/131d9ea5-61f8-4485-9da5-31164f72a667-kube-api-access-vmvnt\") pod \"ovnkube-node-g9v74\" (UID: \"131d9ea5-61f8-4485-9da5-31164f72a667\") " pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.738621 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.901302 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/2.log" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.902045 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/1.log" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.902113 4889 generic.go:334] "Generic (PLEG): container finished" podID="7dcfc583-b6f2-415a-a4f0-adb70f4865c8" containerID="e6a5d8583a907ae88f80f4668dcf9c28a83c7fa6c71e1f663c6d1a05d56580cd" exitCode=2 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.902192 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmhk6" event={"ID":"7dcfc583-b6f2-415a-a4f0-adb70f4865c8","Type":"ContainerDied","Data":"e6a5d8583a907ae88f80f4668dcf9c28a83c7fa6c71e1f663c6d1a05d56580cd"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.902276 4889 scope.go:117] "RemoveContainer" containerID="8ac632c9323858b16fd3babcd9fb5ab5e8c17f3b459878dbb13def6beddcb9c3" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.903136 4889 scope.go:117] "RemoveContainer" 
containerID="e6a5d8583a907ae88f80f4668dcf9c28a83c7fa6c71e1f663c6d1a05d56580cd" Feb 19 00:15:50 crc kubenswrapper[4889]: E0219 00:15:50.903704 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qmhk6_openshift-multus(7dcfc583-b6f2-415a-a4f0-adb70f4865c8)\"" pod="openshift-multus/multus-qmhk6" podUID="7dcfc583-b6f2-415a-a4f0-adb70f4865c8" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.906149 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovnkube-controller/3.log" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.909681 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovn-acl-logging/0.log" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.911592 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-4nwjd_707d1219-7187-4fda-b155-e6d64687b190/ovn-controller/0.log" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912427 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70" exitCode=0 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912452 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b" exitCode=0 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912462 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5" exitCode=0 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 
00:15:50.912470 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea" exitCode=0 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912478 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34" exitCode=0 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912488 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba" exitCode=0 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912495 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae" exitCode=143 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912503 4889 generic.go:334] "Generic (PLEG): container finished" podID="707d1219-7187-4fda-b155-e6d64687b190" containerID="c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba" exitCode=143 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912531 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912557 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912602 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912621 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912635 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912648 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912662 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912677 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912692 4889 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912699 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912707 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912714 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912722 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912729 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912736 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912744 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912753 4889 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912764 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912776 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912786 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912795 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912804 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912812 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912821 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} Feb 19 
00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912829 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912836 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912843 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912850 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912860 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912871 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912882 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912888 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912895 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912901 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912908 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912916 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912923 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912930 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912937 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912946 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-4nwjd" event={"ID":"707d1219-7187-4fda-b155-e6d64687b190","Type":"ContainerDied","Data":"e81efd83b50afae8905567e1aca0dbdca2a1b40443de06e497480eb4746ac0b5"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912956 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912964 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912971 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912978 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912985 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912991 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.912999 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 
00:15:50.913005 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.913012 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.913021 4889 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.915512 4889 generic.go:334] "Generic (PLEG): container finished" podID="131d9ea5-61f8-4485-9da5-31164f72a667" containerID="033fa28d896cf86a4908739d31cb13810e67637f2b7841e54f391658fc227c1c" exitCode=0 Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.915549 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerDied","Data":"033fa28d896cf86a4908739d31cb13810e67637f2b7841e54f391658fc227c1c"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.915670 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerStarted","Data":"2971f42e9c6c8daff8a685891838cf0f82531a5476d5602262cb86a48ffd4441"} Feb 19 00:15:50 crc kubenswrapper[4889]: I0219 00:15:50.982720 4889 scope.go:117] "RemoveContainer" containerID="a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.010024 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4nwjd"] Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 
00:15:51.014311 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.014548 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4nwjd"] Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.039060 4889 scope.go:117] "RemoveContainer" containerID="96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.053864 4889 scope.go:117] "RemoveContainer" containerID="7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.065197 4889 scope.go:117] "RemoveContainer" containerID="bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.076726 4889 scope.go:117] "RemoveContainer" containerID="a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.091116 4889 scope.go:117] "RemoveContainer" containerID="55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.104999 4889 scope.go:117] "RemoveContainer" containerID="cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.139725 4889 scope.go:117] "RemoveContainer" containerID="c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.162635 4889 scope.go:117] "RemoveContainer" containerID="a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.179402 4889 scope.go:117] "RemoveContainer" containerID="a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.180051 4889 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70\": container with ID starting with a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70 not found: ID does not exist" containerID="a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.180125 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} err="failed to get container status \"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70\": rpc error: code = NotFound desc = could not find container \"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70\": container with ID starting with a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.180177 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.180705 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\": container with ID starting with 54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e not found: ID does not exist" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.180804 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} err="failed to get container status \"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\": rpc error: code = NotFound desc = could 
not find container \"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\": container with ID starting with 54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.180928 4889 scope.go:117] "RemoveContainer" containerID="96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.181574 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\": container with ID starting with 96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b not found: ID does not exist" containerID="96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.181668 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} err="failed to get container status \"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\": rpc error: code = NotFound desc = could not find container \"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\": container with ID starting with 96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.181748 4889 scope.go:117] "RemoveContainer" containerID="7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.182151 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\": container with ID starting with 7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5 not found: 
ID does not exist" containerID="7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.182179 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} err="failed to get container status \"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\": rpc error: code = NotFound desc = could not find container \"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\": container with ID starting with 7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.182196 4889 scope.go:117] "RemoveContainer" containerID="bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.182597 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\": container with ID starting with bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea not found: ID does not exist" containerID="bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.182696 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} err="failed to get container status \"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\": rpc error: code = NotFound desc = could not find container \"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\": container with ID starting with bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.182777 4889 
scope.go:117] "RemoveContainer" containerID="a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.183208 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\": container with ID starting with a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34 not found: ID does not exist" containerID="a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.183279 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} err="failed to get container status \"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\": rpc error: code = NotFound desc = could not find container \"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\": container with ID starting with a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.183323 4889 scope.go:117] "RemoveContainer" containerID="55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.183735 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\": container with ID starting with 55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba not found: ID does not exist" containerID="55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.183826 4889 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} err="failed to get container status \"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\": rpc error: code = NotFound desc = could not find container \"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\": container with ID starting with 55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.183892 4889 scope.go:117] "RemoveContainer" containerID="cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.184298 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\": container with ID starting with cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae not found: ID does not exist" containerID="cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.184399 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} err="failed to get container status \"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\": rpc error: code = NotFound desc = could not find container \"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\": container with ID starting with cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.184590 4889 scope.go:117] "RemoveContainer" containerID="c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.185187 4889 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\": container with ID starting with c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba not found: ID does not exist" containerID="c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.185367 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} err="failed to get container status \"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\": rpc error: code = NotFound desc = could not find container \"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\": container with ID starting with c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.185434 4889 scope.go:117] "RemoveContainer" containerID="a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b" Feb 19 00:15:51 crc kubenswrapper[4889]: E0219 00:15:51.185719 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\": container with ID starting with a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b not found: ID does not exist" containerID="a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.185746 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b"} err="failed to get container status \"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\": rpc error: code = NotFound desc = could not find container 
\"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\": container with ID starting with a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.185766 4889 scope.go:117] "RemoveContainer" containerID="a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.185971 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} err="failed to get container status \"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70\": rpc error: code = NotFound desc = could not find container \"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70\": container with ID starting with a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.185995 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.186194 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} err="failed to get container status \"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\": rpc error: code = NotFound desc = could not find container \"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\": container with ID starting with 54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.186287 4889 scope.go:117] "RemoveContainer" containerID="96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.186942 4889 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} err="failed to get container status \"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\": rpc error: code = NotFound desc = could not find container \"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\": container with ID starting with 96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.186965 4889 scope.go:117] "RemoveContainer" containerID="7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.187353 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} err="failed to get container status \"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\": rpc error: code = NotFound desc = could not find container \"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\": container with ID starting with 7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.187416 4889 scope.go:117] "RemoveContainer" containerID="bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.187705 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} err="failed to get container status \"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\": rpc error: code = NotFound desc = could not find container \"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\": container with ID starting with 
bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.187731 4889 scope.go:117] "RemoveContainer" containerID="a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.188136 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} err="failed to get container status \"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\": rpc error: code = NotFound desc = could not find container \"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\": container with ID starting with a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.188155 4889 scope.go:117] "RemoveContainer" containerID="55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.188791 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} err="failed to get container status \"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\": rpc error: code = NotFound desc = could not find container \"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\": container with ID starting with 55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.188814 4889 scope.go:117] "RemoveContainer" containerID="cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.189095 4889 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} err="failed to get container status \"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\": rpc error: code = NotFound desc = could not find container \"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\": container with ID starting with cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.189121 4889 scope.go:117] "RemoveContainer" containerID="c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.189401 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} err="failed to get container status \"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\": rpc error: code = NotFound desc = could not find container \"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\": container with ID starting with c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.189423 4889 scope.go:117] "RemoveContainer" containerID="a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.189662 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b"} err="failed to get container status \"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\": rpc error: code = NotFound desc = could not find container \"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\": container with ID starting with a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b not found: ID does not 
exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.189686 4889 scope.go:117] "RemoveContainer" containerID="a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.189936 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} err="failed to get container status \"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70\": rpc error: code = NotFound desc = could not find container \"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70\": container with ID starting with a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.189961 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.190197 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} err="failed to get container status \"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\": rpc error: code = NotFound desc = could not find container \"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\": container with ID starting with 54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.190234 4889 scope.go:117] "RemoveContainer" containerID="96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.190611 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} err="failed to get container status 
\"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\": rpc error: code = NotFound desc = could not find container \"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\": container with ID starting with 96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.190635 4889 scope.go:117] "RemoveContainer" containerID="7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.190859 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} err="failed to get container status \"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\": rpc error: code = NotFound desc = could not find container \"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\": container with ID starting with 7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.190882 4889 scope.go:117] "RemoveContainer" containerID="bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.191322 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} err="failed to get container status \"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\": rpc error: code = NotFound desc = could not find container \"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\": container with ID starting with bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.191346 4889 scope.go:117] "RemoveContainer" 
containerID="a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.191634 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} err="failed to get container status \"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\": rpc error: code = NotFound desc = could not find container \"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\": container with ID starting with a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.191660 4889 scope.go:117] "RemoveContainer" containerID="55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.192032 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} err="failed to get container status \"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\": rpc error: code = NotFound desc = could not find container \"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\": container with ID starting with 55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.192072 4889 scope.go:117] "RemoveContainer" containerID="cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.192435 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} err="failed to get container status \"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\": rpc error: code = NotFound desc = could 
not find container \"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\": container with ID starting with cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.192464 4889 scope.go:117] "RemoveContainer" containerID="c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.192773 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} err="failed to get container status \"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\": rpc error: code = NotFound desc = could not find container \"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\": container with ID starting with c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.192798 4889 scope.go:117] "RemoveContainer" containerID="a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.193107 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b"} err="failed to get container status \"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\": rpc error: code = NotFound desc = could not find container \"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\": container with ID starting with a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.193131 4889 scope.go:117] "RemoveContainer" containerID="a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 
00:15:51.193451 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70"} err="failed to get container status \"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70\": rpc error: code = NotFound desc = could not find container \"a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70\": container with ID starting with a945d586286113a11c759cf6d0b4eef9e5d090c1672e52ee863130f50c8a8a70 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.193472 4889 scope.go:117] "RemoveContainer" containerID="54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.193669 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e"} err="failed to get container status \"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\": rpc error: code = NotFound desc = could not find container \"54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e\": container with ID starting with 54d70509ab9d270660e84f9e8f0db638beb8e66160c03919c717d1298fa7d79e not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.193688 4889 scope.go:117] "RemoveContainer" containerID="96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.193938 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b"} err="failed to get container status \"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\": rpc error: code = NotFound desc = could not find container \"96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b\": container with ID starting with 
96f5ad7d8d5a502e70a11e99cd842de47afd8db2a899629f358dbfdbaecc0e2b not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.193958 4889 scope.go:117] "RemoveContainer" containerID="7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.194212 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5"} err="failed to get container status \"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\": rpc error: code = NotFound desc = could not find container \"7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5\": container with ID starting with 7076b864fab8dc34e5183a04f010e9bc008a9766379fd8b35a1c36dc2c9f29a5 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.194282 4889 scope.go:117] "RemoveContainer" containerID="bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.194525 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea"} err="failed to get container status \"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\": rpc error: code = NotFound desc = could not find container \"bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea\": container with ID starting with bd4ca96259d14b784fc4f5d8ca4e2bc1729a560a66a5953e87d92b81f46be5ea not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.194545 4889 scope.go:117] "RemoveContainer" containerID="a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.194774 4889 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34"} err="failed to get container status \"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\": rpc error: code = NotFound desc = could not find container \"a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34\": container with ID starting with a1e5e6580bcd295b2f795d42cc6d17913b44555e4b4a2233e98cf300ae346d34 not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.194793 4889 scope.go:117] "RemoveContainer" containerID="55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.195321 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba"} err="failed to get container status \"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\": rpc error: code = NotFound desc = could not find container \"55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba\": container with ID starting with 55877036eb26b85065c652ef1143920d0c849365a5911c0f01c74cc44c5c66ba not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.195350 4889 scope.go:117] "RemoveContainer" containerID="cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.195696 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae"} err="failed to get container status \"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\": rpc error: code = NotFound desc = could not find container \"cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae\": container with ID starting with cc4eb64573040818386c9982b806d279f6908e4f06dff0e66bc8cc3d98ecccae not found: ID does not 
exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.195717 4889 scope.go:117] "RemoveContainer" containerID="c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.196032 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba"} err="failed to get container status \"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\": rpc error: code = NotFound desc = could not find container \"c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba\": container with ID starting with c7205b0753ab007414a449a0cf3f08e648bea44f12c38bb3d3eb32f216a67dba not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.196053 4889 scope.go:117] "RemoveContainer" containerID="a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.196423 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b"} err="failed to get container status \"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\": rpc error: code = NotFound desc = could not find container \"a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b\": container with ID starting with a145a8b5eafa408db51a56646340770c9e6459933d4a8606fefd2becb8e3fd1b not found: ID does not exist" Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.929600 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerStarted","Data":"0e00620174e5ef94e52d3dce853e3a47be574cf50b976aefd939ebb50ef1f71d"} Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.930122 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerStarted","Data":"73869f57b397436e43e0e26cd0554b18cd6de818aaf8ec0bd691bf9dc66a833d"} Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.930136 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerStarted","Data":"fcb4764a4762c926988124546a1df7d8462bcdaba64c8d714ab58dba8698a633"} Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.930147 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerStarted","Data":"93dbfe1a65f05d13285b2abcba734f21740e4df5aac8746f38e0cd20a5555316"} Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.930157 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerStarted","Data":"20b82a6260ed4f2852b4e21ae8e97e85d0ce3a4138ddda44b42d121b64e1f2fe"} Feb 19 00:15:51 crc kubenswrapper[4889]: I0219 00:15:51.932829 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/2.log" Feb 19 00:15:52 crc kubenswrapper[4889]: I0219 00:15:52.734543 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="707d1219-7187-4fda-b155-e6d64687b190" path="/var/lib/kubelet/pods/707d1219-7187-4fda-b155-e6d64687b190/volumes" Feb 19 00:15:52 crc kubenswrapper[4889]: I0219 00:15:52.953075 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerStarted","Data":"90eafa4f0840e965e31e5e8beb4795b80ea7d57c91fd5aa32bca7bd7c154aea2"} Feb 19 00:15:54 crc kubenswrapper[4889]: I0219 
00:15:54.978565 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerStarted","Data":"d5a4ed43e18bad78900c96367e566d1ca7315c4164daefd3e28dc227e4a88d00"} Feb 19 00:15:56 crc kubenswrapper[4889]: I0219 00:15:56.996420 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" event={"ID":"131d9ea5-61f8-4485-9da5-31164f72a667","Type":"ContainerStarted","Data":"36b56e3a55f53ceea6be6a72555172eb3a4e06276f1da9825e0877e208cfcf5a"} Feb 19 00:15:56 crc kubenswrapper[4889]: I0219 00:15:56.997265 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:56 crc kubenswrapper[4889]: I0219 00:15:56.997282 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:56 crc kubenswrapper[4889]: I0219 00:15:56.997292 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:57 crc kubenswrapper[4889]: I0219 00:15:57.030616 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" podStartSLOduration=7.030586438 podStartE2EDuration="7.030586438s" podCreationTimestamp="2026-02-19 00:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:15:57.024292986 +0000 UTC m=+582.988957977" watchObservedRunningTime="2026-02-19 00:15:57.030586438 +0000 UTC m=+582.995251429" Feb 19 00:15:57 crc kubenswrapper[4889]: I0219 00:15:57.034427 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:15:57 crc kubenswrapper[4889]: I0219 00:15:57.037485 4889 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:16:02 crc kubenswrapper[4889]: I0219 00:16:02.725890 4889 scope.go:117] "RemoveContainer" containerID="e6a5d8583a907ae88f80f4668dcf9c28a83c7fa6c71e1f663c6d1a05d56580cd" Feb 19 00:16:02 crc kubenswrapper[4889]: E0219 00:16:02.726847 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-qmhk6_openshift-multus(7dcfc583-b6f2-415a-a4f0-adb70f4865c8)\"" pod="openshift-multus/multus-qmhk6" podUID="7dcfc583-b6f2-415a-a4f0-adb70f4865c8" Feb 19 00:16:07 crc kubenswrapper[4889]: I0219 00:16:07.787963 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:16:07 crc kubenswrapper[4889]: I0219 00:16:07.788049 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:16:15 crc kubenswrapper[4889]: I0219 00:16:15.724888 4889 scope.go:117] "RemoveContainer" containerID="e6a5d8583a907ae88f80f4668dcf9c28a83c7fa6c71e1f663c6d1a05d56580cd" Feb 19 00:16:16 crc kubenswrapper[4889]: I0219 00:16:16.128608 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-qmhk6_7dcfc583-b6f2-415a-a4f0-adb70f4865c8/kube-multus/2.log" Feb 19 00:16:16 crc kubenswrapper[4889]: I0219 00:16:16.128973 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-qmhk6" 
event={"ID":"7dcfc583-b6f2-415a-a4f0-adb70f4865c8","Type":"ContainerStarted","Data":"7636c5d3781c87ee07b200232af42c5a10dcfe4c725ea924b9754c5734557e34"} Feb 19 00:16:20 crc kubenswrapper[4889]: I0219 00:16:20.764864 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-g9v74" Feb 19 00:16:37 crc kubenswrapper[4889]: I0219 00:16:37.781290 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:16:37 crc kubenswrapper[4889]: I0219 00:16:37.782279 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:16:37 crc kubenswrapper[4889]: I0219 00:16:37.782347 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:16:37 crc kubenswrapper[4889]: I0219 00:16:37.783205 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5344f45dff8eecfe6b1adf4750c6d3e2ab8740bd4015db965625ea2ee833f1a6"} pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:16:37 crc kubenswrapper[4889]: I0219 00:16:37.783294 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" 
containerName="machine-config-daemon" containerID="cri-o://5344f45dff8eecfe6b1adf4750c6d3e2ab8740bd4015db965625ea2ee833f1a6" gracePeriod=600 Feb 19 00:16:38 crc kubenswrapper[4889]: I0219 00:16:38.282425 4889 generic.go:334] "Generic (PLEG): container finished" podID="900d194e-937f-4a59-abba-21ed9f94f24f" containerID="5344f45dff8eecfe6b1adf4750c6d3e2ab8740bd4015db965625ea2ee833f1a6" exitCode=0 Feb 19 00:16:38 crc kubenswrapper[4889]: I0219 00:16:38.282537 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerDied","Data":"5344f45dff8eecfe6b1adf4750c6d3e2ab8740bd4015db965625ea2ee833f1a6"} Feb 19 00:16:38 crc kubenswrapper[4889]: I0219 00:16:38.283064 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"307e06c1533d97412f05ea0d1d6dcd394440c81bd6cadf9dd2c0ed35a7f70904"} Feb 19 00:16:38 crc kubenswrapper[4889]: I0219 00:16:38.283095 4889 scope.go:117] "RemoveContainer" containerID="2231eeacede3ecd2782301c9a109e45033b30fb6e34ee69f5b69e52a119b5056" Feb 19 00:17:04 crc kubenswrapper[4889]: I0219 00:17:04.562170 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsvjd"] Feb 19 00:17:04 crc kubenswrapper[4889]: I0219 00:17:04.565628 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dsvjd" podUID="19c688b4-e818-423b-8a30-79fc47b8671d" containerName="registry-server" containerID="cri-o://dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783" gracePeriod=30 Feb 19 00:17:04 crc kubenswrapper[4889]: I0219 00:17:04.949728 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:17:04 crc kubenswrapper[4889]: I0219 00:17:04.954889 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd5vx\" (UniqueName: \"kubernetes.io/projected/19c688b4-e818-423b-8a30-79fc47b8671d-kube-api-access-gd5vx\") pod \"19c688b4-e818-423b-8a30-79fc47b8671d\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " Feb 19 00:17:04 crc kubenswrapper[4889]: I0219 00:17:04.955053 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-utilities\") pod \"19c688b4-e818-423b-8a30-79fc47b8671d\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " Feb 19 00:17:04 crc kubenswrapper[4889]: I0219 00:17:04.955124 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-catalog-content\") pod \"19c688b4-e818-423b-8a30-79fc47b8671d\" (UID: \"19c688b4-e818-423b-8a30-79fc47b8671d\") " Feb 19 00:17:04 crc kubenswrapper[4889]: I0219 00:17:04.956138 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-utilities" (OuterVolumeSpecName: "utilities") pod "19c688b4-e818-423b-8a30-79fc47b8671d" (UID: "19c688b4-e818-423b-8a30-79fc47b8671d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:17:04 crc kubenswrapper[4889]: I0219 00:17:04.963768 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c688b4-e818-423b-8a30-79fc47b8671d-kube-api-access-gd5vx" (OuterVolumeSpecName: "kube-api-access-gd5vx") pod "19c688b4-e818-423b-8a30-79fc47b8671d" (UID: "19c688b4-e818-423b-8a30-79fc47b8671d"). InnerVolumeSpecName "kube-api-access-gd5vx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:04.982094 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19c688b4-e818-423b-8a30-79fc47b8671d" (UID: "19c688b4-e818-423b-8a30-79fc47b8671d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.056388 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd5vx\" (UniqueName: \"kubernetes.io/projected/19c688b4-e818-423b-8a30-79fc47b8671d-kube-api-access-gd5vx\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.056419 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.056429 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19c688b4-e818-423b-8a30-79fc47b8671d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.470894 4889 generic.go:334] "Generic (PLEG): container finished" podID="19c688b4-e818-423b-8a30-79fc47b8671d" containerID="dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783" exitCode=0 Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.470974 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dsvjd" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.470992 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsvjd" event={"ID":"19c688b4-e818-423b-8a30-79fc47b8671d","Type":"ContainerDied","Data":"dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783"} Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.471068 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dsvjd" event={"ID":"19c688b4-e818-423b-8a30-79fc47b8671d","Type":"ContainerDied","Data":"286d6078977bf79ea707d9cad390a036e6c4b66b799d8fc1f2495393d1bb9698"} Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.471093 4889 scope.go:117] "RemoveContainer" containerID="dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.496402 4889 scope.go:117] "RemoveContainer" containerID="bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.514143 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsvjd"] Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.525098 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dsvjd"] Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.540924 4889 scope.go:117] "RemoveContainer" containerID="bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.556892 4889 scope.go:117] "RemoveContainer" containerID="dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783" Feb 19 00:17:05 crc kubenswrapper[4889]: E0219 00:17:05.557437 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783\": container with ID starting with dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783 not found: ID does not exist" containerID="dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.557485 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783"} err="failed to get container status \"dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783\": rpc error: code = NotFound desc = could not find container \"dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783\": container with ID starting with dd2cdb583699376bb04701df0391403a5cc30acfcfa7b6b3677f6f773e4ca783 not found: ID does not exist" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.557519 4889 scope.go:117] "RemoveContainer" containerID="bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005" Feb 19 00:17:05 crc kubenswrapper[4889]: E0219 00:17:05.557912 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005\": container with ID starting with bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005 not found: ID does not exist" containerID="bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.557950 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005"} err="failed to get container status \"bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005\": rpc error: code = NotFound desc = could not find container \"bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005\": container with ID 
starting with bfb0405dfaeb8ef48a243167fc5c1d6c1edd7cb8451589ad9fae2851a7eb0005 not found: ID does not exist" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.557970 4889 scope.go:117] "RemoveContainer" containerID="bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0" Feb 19 00:17:05 crc kubenswrapper[4889]: E0219 00:17:05.558267 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0\": container with ID starting with bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0 not found: ID does not exist" containerID="bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0" Feb 19 00:17:05 crc kubenswrapper[4889]: I0219 00:17:05.558294 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0"} err="failed to get container status \"bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0\": rpc error: code = NotFound desc = could not find container \"bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0\": container with ID starting with bdcc48090a35accc5aa8f4530312bfc7bf0536f137d15bbee3fd36b37335d8c0 not found: ID does not exist" Feb 19 00:17:06 crc kubenswrapper[4889]: I0219 00:17:06.751664 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c688b4-e818-423b-8a30-79fc47b8671d" path="/var/lib/kubelet/pods/19c688b4-e818-423b-8a30-79fc47b8671d/volumes" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.572350 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc"] Feb 19 00:17:08 crc kubenswrapper[4889]: E0219 00:17:08.572773 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c688b4-e818-423b-8a30-79fc47b8671d" 
containerName="extract-content" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.572797 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c688b4-e818-423b-8a30-79fc47b8671d" containerName="extract-content" Feb 19 00:17:08 crc kubenswrapper[4889]: E0219 00:17:08.572815 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c688b4-e818-423b-8a30-79fc47b8671d" containerName="registry-server" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.572825 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c688b4-e818-423b-8a30-79fc47b8671d" containerName="registry-server" Feb 19 00:17:08 crc kubenswrapper[4889]: E0219 00:17:08.572847 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c688b4-e818-423b-8a30-79fc47b8671d" containerName="extract-utilities" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.572859 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c688b4-e818-423b-8a30-79fc47b8671d" containerName="extract-utilities" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.573018 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c688b4-e818-423b-8a30-79fc47b8671d" containerName="registry-server" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.574268 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.579909 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.582460 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc"] Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.702773 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.703408 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.703538 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tmns\" (UniqueName: \"kubernetes.io/projected/cb97c079-c478-4174-833e-6c5c8422db49-kube-api-access-4tmns\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: 
I0219 00:17:08.804291 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tmns\" (UniqueName: \"kubernetes.io/projected/cb97c079-c478-4174-833e-6c5c8422db49-kube-api-access-4tmns\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.804356 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.804385 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.804838 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.804954 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.828871 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tmns\" (UniqueName: \"kubernetes.io/projected/cb97c079-c478-4174-833e-6c5c8422db49-kube-api-access-4tmns\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:08 crc kubenswrapper[4889]: I0219 00:17:08.893751 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:09 crc kubenswrapper[4889]: I0219 00:17:09.092346 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc"] Feb 19 00:17:09 crc kubenswrapper[4889]: I0219 00:17:09.495558 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" event={"ID":"cb97c079-c478-4174-833e-6c5c8422db49","Type":"ContainerStarted","Data":"7030a43a7307625fcfdeceebeaa759031d56828769a8338e89bd8c2cf590431b"} Feb 19 00:17:09 crc kubenswrapper[4889]: I0219 00:17:09.496063 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" event={"ID":"cb97c079-c478-4174-833e-6c5c8422db49","Type":"ContainerStarted","Data":"ae6c3ff4292c137917f9e19b9b5e5cc93d3d49ebb96e9a20d3f0e332be13e030"} Feb 19 00:17:10 crc kubenswrapper[4889]: I0219 00:17:10.504918 4889 
generic.go:334] "Generic (PLEG): container finished" podID="cb97c079-c478-4174-833e-6c5c8422db49" containerID="7030a43a7307625fcfdeceebeaa759031d56828769a8338e89bd8c2cf590431b" exitCode=0 Feb 19 00:17:10 crc kubenswrapper[4889]: I0219 00:17:10.504979 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" event={"ID":"cb97c079-c478-4174-833e-6c5c8422db49","Type":"ContainerDied","Data":"7030a43a7307625fcfdeceebeaa759031d56828769a8338e89bd8c2cf590431b"} Feb 19 00:17:10 crc kubenswrapper[4889]: I0219 00:17:10.508011 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:17:12 crc kubenswrapper[4889]: I0219 00:17:12.521104 4889 generic.go:334] "Generic (PLEG): container finished" podID="cb97c079-c478-4174-833e-6c5c8422db49" containerID="c18db37cd676ec7ad28bb97ba54935e5862e4726d18b58d2686574030ee88111" exitCode=0 Feb 19 00:17:12 crc kubenswrapper[4889]: I0219 00:17:12.521295 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" event={"ID":"cb97c079-c478-4174-833e-6c5c8422db49","Type":"ContainerDied","Data":"c18db37cd676ec7ad28bb97ba54935e5862e4726d18b58d2686574030ee88111"} Feb 19 00:17:13 crc kubenswrapper[4889]: I0219 00:17:13.529993 4889 generic.go:334] "Generic (PLEG): container finished" podID="cb97c079-c478-4174-833e-6c5c8422db49" containerID="547c673c73068255a4879bdbc36cd6555336bf6e28998d10b3239e3d77ea36ac" exitCode=0 Feb 19 00:17:13 crc kubenswrapper[4889]: I0219 00:17:13.530065 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" event={"ID":"cb97c079-c478-4174-833e-6c5c8422db49","Type":"ContainerDied","Data":"547c673c73068255a4879bdbc36cd6555336bf6e28998d10b3239e3d77ea36ac"} Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 
00:17:14.780776 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.784037 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-util\") pod \"cb97c079-c478-4174-833e-6c5c8422db49\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.784103 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-bundle\") pod \"cb97c079-c478-4174-833e-6c5c8422db49\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.784148 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tmns\" (UniqueName: \"kubernetes.io/projected/cb97c079-c478-4174-833e-6c5c8422db49-kube-api-access-4tmns\") pod \"cb97c079-c478-4174-833e-6c5c8422db49\" (UID: \"cb97c079-c478-4174-833e-6c5c8422db49\") " Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.786669 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-bundle" (OuterVolumeSpecName: "bundle") pod "cb97c079-c478-4174-833e-6c5c8422db49" (UID: "cb97c079-c478-4174-833e-6c5c8422db49"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.792366 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb97c079-c478-4174-833e-6c5c8422db49-kube-api-access-4tmns" (OuterVolumeSpecName: "kube-api-access-4tmns") pod "cb97c079-c478-4174-833e-6c5c8422db49" (UID: "cb97c079-c478-4174-833e-6c5c8422db49"). InnerVolumeSpecName "kube-api-access-4tmns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.804020 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-util" (OuterVolumeSpecName: "util") pod "cb97c079-c478-4174-833e-6c5c8422db49" (UID: "cb97c079-c478-4174-833e-6c5c8422db49"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.884994 4889 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.885039 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tmns\" (UniqueName: \"kubernetes.io/projected/cb97c079-c478-4174-833e-6c5c8422db49-kube-api-access-4tmns\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.885052 4889 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cb97c079-c478-4174-833e-6c5c8422db49-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.970016 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6"] Feb 19 00:17:14 crc kubenswrapper[4889]: E0219 00:17:14.970382 4889 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb97c079-c478-4174-833e-6c5c8422db49" containerName="extract" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.970403 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb97c079-c478-4174-833e-6c5c8422db49" containerName="extract" Feb 19 00:17:14 crc kubenswrapper[4889]: E0219 00:17:14.970425 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb97c079-c478-4174-833e-6c5c8422db49" containerName="util" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.970436 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb97c079-c478-4174-833e-6c5c8422db49" containerName="util" Feb 19 00:17:14 crc kubenswrapper[4889]: E0219 00:17:14.970455 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb97c079-c478-4174-833e-6c5c8422db49" containerName="pull" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.970464 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb97c079-c478-4174-833e-6c5c8422db49" containerName="pull" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.970571 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb97c079-c478-4174-833e-6c5c8422db49" containerName="extract" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.971485 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.980133 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6"] Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.986252 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.986308 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:14 crc kubenswrapper[4889]: I0219 00:17:14.986369 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7h8s\" (UniqueName: \"kubernetes.io/projected/566b088f-3462-4687-bd78-db2a2cf860cb-kube-api-access-x7h8s\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.087914 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.087987 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.088024 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7h8s\" (UniqueName: \"kubernetes.io/projected/566b088f-3462-4687-bd78-db2a2cf860cb-kube-api-access-x7h8s\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.088629 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.088740 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6\" (UID: 
\"566b088f-3462-4687-bd78-db2a2cf860cb\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.107039 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7h8s\" (UniqueName: \"kubernetes.io/projected/566b088f-3462-4687-bd78-db2a2cf860cb-kube-api-access-x7h8s\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.334472 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.547627 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" event={"ID":"cb97c079-c478-4174-833e-6c5c8422db49","Type":"ContainerDied","Data":"ae6c3ff4292c137917f9e19b9b5e5cc93d3d49ebb96e9a20d3f0e332be13e030"} Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.547682 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae6c3ff4292c137917f9e19b9b5e5cc93d3d49ebb96e9a20d3f0e332be13e030" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.547768 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.559047 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6"] Feb 19 00:17:15 crc kubenswrapper[4889]: W0219 00:17:15.567154 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod566b088f_3462_4687_bd78_db2a2cf860cb.slice/crio-c1755d481ef43c228b23f233d0a3f4f138bdffcf584c7201d5604cf831d4802d WatchSource:0}: Error finding container c1755d481ef43c228b23f233d0a3f4f138bdffcf584c7201d5604cf831d4802d: Status 404 returned error can't find the container with id c1755d481ef43c228b23f233d0a3f4f138bdffcf584c7201d5604cf831d4802d Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.768170 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj"] Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.771978 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.775445 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj"] Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.899672 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s94j\" (UniqueName: \"kubernetes.io/projected/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-kube-api-access-9s94j\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.900148 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:15 crc kubenswrapper[4889]: I0219 00:17:15.900264 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.002407 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.002477 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.002527 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s94j\" (UniqueName: \"kubernetes.io/projected/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-kube-api-access-9s94j\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.003260 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.003417 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj\" (UID: 
\"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.026331 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s94j\" (UniqueName: \"kubernetes.io/projected/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-kube-api-access-9s94j\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.159885 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.556243 4889 generic.go:334] "Generic (PLEG): container finished" podID="566b088f-3462-4687-bd78-db2a2cf860cb" containerID="598cce1617cc56e312ef776d9933da5ba3f73200950c36c2c7806dcc78fe8e4c" exitCode=0 Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.556319 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" event={"ID":"566b088f-3462-4687-bd78-db2a2cf860cb","Type":"ContainerDied","Data":"598cce1617cc56e312ef776d9933da5ba3f73200950c36c2c7806dcc78fe8e4c"} Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.556868 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" event={"ID":"566b088f-3462-4687-bd78-db2a2cf860cb","Type":"ContainerStarted","Data":"c1755d481ef43c228b23f233d0a3f4f138bdffcf584c7201d5604cf831d4802d"} Feb 19 00:17:16 crc kubenswrapper[4889]: I0219 00:17:16.614832 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj"] Feb 19 00:17:16 crc kubenswrapper[4889]: W0219 00:17:16.626409 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac4eb3b1_fdd6_46dc_bfb9_9a7f2edac251.slice/crio-4e7dec7261a3e1da25c3fa62fd63ff97dfea13a91cdcb8db1541a1ddd76c2487 WatchSource:0}: Error finding container 4e7dec7261a3e1da25c3fa62fd63ff97dfea13a91cdcb8db1541a1ddd76c2487: Status 404 returned error can't find the container with id 4e7dec7261a3e1da25c3fa62fd63ff97dfea13a91cdcb8db1541a1ddd76c2487 Feb 19 00:17:17 crc kubenswrapper[4889]: I0219 00:17:17.567691 4889 generic.go:334] "Generic (PLEG): container finished" podID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerID="f5dba8344f986fbc8832a235faf44e145b3f5617de1878bfce1a4125e616c819" exitCode=0 Feb 19 00:17:17 crc kubenswrapper[4889]: I0219 00:17:17.567736 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" event={"ID":"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251","Type":"ContainerDied","Data":"f5dba8344f986fbc8832a235faf44e145b3f5617de1878bfce1a4125e616c819"} Feb 19 00:17:17 crc kubenswrapper[4889]: I0219 00:17:17.568116 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" event={"ID":"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251","Type":"ContainerStarted","Data":"4e7dec7261a3e1da25c3fa62fd63ff97dfea13a91cdcb8db1541a1ddd76c2487"} Feb 19 00:17:18 crc kubenswrapper[4889]: I0219 00:17:18.576492 4889 generic.go:334] "Generic (PLEG): container finished" podID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerID="99b07a0712367e9b45bcc5e33b8569f0d4d6c879c5ec444600b10846ee462fed" exitCode=0 Feb 19 00:17:18 crc kubenswrapper[4889]: I0219 00:17:18.576569 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" event={"ID":"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251","Type":"ContainerDied","Data":"99b07a0712367e9b45bcc5e33b8569f0d4d6c879c5ec444600b10846ee462fed"} Feb 19 00:17:20 crc kubenswrapper[4889]: I0219 00:17:20.054285 4889 generic.go:334] "Generic (PLEG): container finished" podID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerID="342d3a680eba09cce1ddac20fdfe3889abcca2a25fd348eeef48c8ac7c9b52bd" exitCode=0 Feb 19 00:17:20 crc kubenswrapper[4889]: I0219 00:17:20.055003 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" event={"ID":"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251","Type":"ContainerDied","Data":"342d3a680eba09cce1ddac20fdfe3889abcca2a25fd348eeef48c8ac7c9b52bd"} Feb 19 00:17:20 crc kubenswrapper[4889]: I0219 00:17:20.064306 4889 generic.go:334] "Generic (PLEG): container finished" podID="566b088f-3462-4687-bd78-db2a2cf860cb" containerID="5cbd8419b3562ef321ea6cfd51ab9f895372d660ffd59585b67ba46f8ea84737" exitCode=0 Feb 19 00:17:20 crc kubenswrapper[4889]: I0219 00:17:20.064367 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" event={"ID":"566b088f-3462-4687-bd78-db2a2cf860cb","Type":"ContainerDied","Data":"5cbd8419b3562ef321ea6cfd51ab9f895372d660ffd59585b67ba46f8ea84737"} Feb 19 00:17:21 crc kubenswrapper[4889]: I0219 00:17:21.198847 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" event={"ID":"566b088f-3462-4687-bd78-db2a2cf860cb","Type":"ContainerStarted","Data":"64bbee43910dae4d090a0cb4d8820bd636c8432448f0d4cafda8c024d59cccba"} Feb 19 00:17:21 crc kubenswrapper[4889]: I0219 00:17:21.252840 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" podStartSLOduration=4.797048033 podStartE2EDuration="7.252818982s" podCreationTimestamp="2026-02-19 00:17:14 +0000 UTC" firstStartedPulling="2026-02-19 00:17:16.558605272 +0000 UTC m=+662.523270283" lastFinishedPulling="2026-02-19 00:17:19.014376241 +0000 UTC m=+664.979041232" observedRunningTime="2026-02-19 00:17:21.251334533 +0000 UTC m=+667.215999524" watchObservedRunningTime="2026-02-19 00:17:21.252818982 +0000 UTC m=+667.217483973" Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.217351 4889 generic.go:334] "Generic (PLEG): container finished" podID="566b088f-3462-4687-bd78-db2a2cf860cb" containerID="64bbee43910dae4d090a0cb4d8820bd636c8432448f0d4cafda8c024d59cccba" exitCode=0 Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.217407 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" event={"ID":"566b088f-3462-4687-bd78-db2a2cf860cb","Type":"ContainerDied","Data":"64bbee43910dae4d090a0cb4d8820bd636c8432448f0d4cafda8c024d59cccba"} Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.557006 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.610241 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-bundle\") pod \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.610332 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-util\") pod \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.610357 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s94j\" (UniqueName: \"kubernetes.io/projected/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-kube-api-access-9s94j\") pod \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\" (UID: \"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251\") " Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.612340 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-bundle" (OuterVolumeSpecName: "bundle") pod "ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" (UID: "ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.622388 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-kube-api-access-9s94j" (OuterVolumeSpecName: "kube-api-access-9s94j") pod "ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" (UID: "ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251"). InnerVolumeSpecName "kube-api-access-9s94j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.640883 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-util" (OuterVolumeSpecName: "util") pod "ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" (UID: "ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.711489 4889 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.711520 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s94j\" (UniqueName: \"kubernetes.io/projected/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-kube-api-access-9s94j\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:22 crc kubenswrapper[4889]: I0219 00:17:22.711532 4889 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.266532 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.266617 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj" event={"ID":"ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251","Type":"ContainerDied","Data":"4e7dec7261a3e1da25c3fa62fd63ff97dfea13a91cdcb8db1541a1ddd76c2487"} Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.266676 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e7dec7261a3e1da25c3fa62fd63ff97dfea13a91cdcb8db1541a1ddd76c2487" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.304587 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv"] Feb 19 00:17:23 crc kubenswrapper[4889]: E0219 00:17:23.304959 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerName="util" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.304984 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerName="util" Feb 19 00:17:23 crc kubenswrapper[4889]: E0219 00:17:23.305001 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerName="extract" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.305010 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerName="extract" Feb 19 00:17:23 crc kubenswrapper[4889]: E0219 00:17:23.305027 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerName="pull" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.305036 4889 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerName="pull" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.305176 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251" containerName="extract" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.306176 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.336940 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv"] Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.454206 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8948n\" (UniqueName: \"kubernetes.io/projected/2c641527-033b-418b-9aba-d399b711acaf-kube-api-access-8948n\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.454310 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.454365 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-bundle\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.556364 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8948n\" (UniqueName: \"kubernetes.io/projected/2c641527-033b-418b-9aba-d399b711acaf-kube-api-access-8948n\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.556460 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.556534 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.557290 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.557441 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.623411 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8948n\" (UniqueName: \"kubernetes.io/projected/2c641527-033b-418b-9aba-d399b711acaf-kube-api-access-8948n\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:23 crc kubenswrapper[4889]: I0219 00:17:23.635670 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.586873 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.827914 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7h8s\" (UniqueName: \"kubernetes.io/projected/566b088f-3462-4687-bd78-db2a2cf860cb-kube-api-access-x7h8s\") pod \"566b088f-3462-4687-bd78-db2a2cf860cb\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.828007 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-util\") pod \"566b088f-3462-4687-bd78-db2a2cf860cb\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.828128 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-bundle\") pod \"566b088f-3462-4687-bd78-db2a2cf860cb\" (UID: \"566b088f-3462-4687-bd78-db2a2cf860cb\") " Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.829040 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-bundle" (OuterVolumeSpecName: "bundle") pod "566b088f-3462-4687-bd78-db2a2cf860cb" (UID: "566b088f-3462-4687-bd78-db2a2cf860cb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.844520 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/566b088f-3462-4687-bd78-db2a2cf860cb-kube-api-access-x7h8s" (OuterVolumeSpecName: "kube-api-access-x7h8s") pod "566b088f-3462-4687-bd78-db2a2cf860cb" (UID: "566b088f-3462-4687-bd78-db2a2cf860cb"). InnerVolumeSpecName "kube-api-access-x7h8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.848406 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-util" (OuterVolumeSpecName: "util") pod "566b088f-3462-4687-bd78-db2a2cf860cb" (UID: "566b088f-3462-4687-bd78-db2a2cf860cb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.929385 4889 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.929433 4889 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/566b088f-3462-4687-bd78-db2a2cf860cb-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:24 crc kubenswrapper[4889]: I0219 00:17:24.929446 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7h8s\" (UniqueName: \"kubernetes.io/projected/566b088f-3462-4687-bd78-db2a2cf860cb-kube-api-access-x7h8s\") on node \"crc\" DevicePath \"\"" Feb 19 00:17:25 crc kubenswrapper[4889]: I0219 00:17:25.297631 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv"] Feb 19 00:17:25 crc kubenswrapper[4889]: W0219 00:17:25.320508 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c641527_033b_418b_9aba_d399b711acaf.slice/crio-9b01bc5ef5192887ce2ff8e4013cca6c979c61f993c84694eb17a4373f6d0287 WatchSource:0}: Error finding container 9b01bc5ef5192887ce2ff8e4013cca6c979c61f993c84694eb17a4373f6d0287: Status 404 returned error can't find the container with id 9b01bc5ef5192887ce2ff8e4013cca6c979c61f993c84694eb17a4373f6d0287 
Feb 19 00:17:25 crc kubenswrapper[4889]: I0219 00:17:25.474635 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" event={"ID":"2c641527-033b-418b-9aba-d399b711acaf","Type":"ContainerStarted","Data":"9b01bc5ef5192887ce2ff8e4013cca6c979c61f993c84694eb17a4373f6d0287"}
Feb 19 00:17:25 crc kubenswrapper[4889]: I0219 00:17:25.478965 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6" event={"ID":"566b088f-3462-4687-bd78-db2a2cf860cb","Type":"ContainerDied","Data":"c1755d481ef43c228b23f233d0a3f4f138bdffcf584c7201d5604cf831d4802d"}
Feb 19 00:17:25 crc kubenswrapper[4889]: I0219 00:17:25.479018 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1755d481ef43c228b23f233d0a3f4f138bdffcf584c7201d5604cf831d4802d"
Feb 19 00:17:25 crc kubenswrapper[4889]: I0219 00:17:25.479022 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6"
Feb 19 00:17:26 crc kubenswrapper[4889]: I0219 00:17:26.486263 4889 generic.go:334] "Generic (PLEG): container finished" podID="2c641527-033b-418b-9aba-d399b711acaf" containerID="aa5d0a78b4d8710acbd4e71effd39f582467a3b3a730df81b160b59d95a685ea" exitCode=0
Feb 19 00:17:26 crc kubenswrapper[4889]: I0219 00:17:26.486368 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" event={"ID":"2c641527-033b-418b-9aba-d399b711acaf","Type":"ContainerDied","Data":"aa5d0a78b4d8710acbd4e71effd39f582467a3b3a730df81b160b59d95a685ea"}
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.525037 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7"]
Feb 19 00:17:27 crc kubenswrapper[4889]: E0219 00:17:27.531282 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="566b088f-3462-4687-bd78-db2a2cf860cb" containerName="extract"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.531451 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="566b088f-3462-4687-bd78-db2a2cf860cb" containerName="extract"
Feb 19 00:17:27 crc kubenswrapper[4889]: E0219 00:17:27.531539 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="566b088f-3462-4687-bd78-db2a2cf860cb" containerName="util"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.531610 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="566b088f-3462-4687-bd78-db2a2cf860cb" containerName="util"
Feb 19 00:17:27 crc kubenswrapper[4889]: E0219 00:17:27.531702 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="566b088f-3462-4687-bd78-db2a2cf860cb" containerName="pull"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.531798 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="566b088f-3462-4687-bd78-db2a2cf860cb" containerName="pull"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.532084 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="566b088f-3462-4687-bd78-db2a2cf860cb" containerName="extract"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.533023 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.552065 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-55pws"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.552306 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.570359 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4fn\" (UniqueName: \"kubernetes.io/projected/211e50ac-fbca-44f5-8e00-d6462342ee96-kube-api-access-7s4fn\") pod \"obo-prometheus-operator-68bc856cb9-x78h7\" (UID: \"211e50ac-fbca-44f5-8e00-d6462342ee96\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.579406 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.611714 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7"]
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.672162 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s4fn\" (UniqueName: \"kubernetes.io/projected/211e50ac-fbca-44f5-8e00-d6462342ee96-kube-api-access-7s4fn\") pod \"obo-prometheus-operator-68bc856cb9-x78h7\" (UID: \"211e50ac-fbca-44f5-8e00-d6462342ee96\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.684300 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"]
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.685253 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.689981 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"]
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.690053 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.690415 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qszjh"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.695746 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.726464 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s4fn\" (UniqueName: \"kubernetes.io/projected/211e50ac-fbca-44f5-8e00-d6462342ee96-kube-api-access-7s4fn\") pod \"obo-prometheus-operator-68bc856cb9-x78h7\" (UID: \"211e50ac-fbca-44f5-8e00-d6462342ee96\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.755476 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"]
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.773261 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7b1bacc-63b6-4446-a2b1-a1306e34f89c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84\" (UID: \"d7b1bacc-63b6-4446-a2b1-a1306e34f89c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.773338 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7b1bacc-63b6-4446-a2b1-a1306e34f89c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84\" (UID: \"d7b1bacc-63b6-4446-a2b1-a1306e34f89c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.773416 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9ab512a-6196-4d62-a32d-b869b3d080bf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv\" (UID: \"e9ab512a-6196-4d62-a32d-b869b3d080bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.773471 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9ab512a-6196-4d62-a32d-b869b3d080bf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv\" (UID: \"e9ab512a-6196-4d62-a32d-b869b3d080bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.808467 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"]
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.854880 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-q88r4"]
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.856165 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-q88r4"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.861566 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-dlhhp"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.861858 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.880212 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9ab512a-6196-4d62-a32d-b869b3d080bf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv\" (UID: \"e9ab512a-6196-4d62-a32d-b869b3d080bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.880311 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7b1bacc-63b6-4446-a2b1-a1306e34f89c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84\" (UID: \"d7b1bacc-63b6-4446-a2b1-a1306e34f89c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.880347 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7b1bacc-63b6-4446-a2b1-a1306e34f89c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84\" (UID: \"d7b1bacc-63b6-4446-a2b1-a1306e34f89c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.881817 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9ab512a-6196-4d62-a32d-b869b3d080bf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv\" (UID: \"e9ab512a-6196-4d62-a32d-b869b3d080bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.888128 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e9ab512a-6196-4d62-a32d-b869b3d080bf-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv\" (UID: \"e9ab512a-6196-4d62-a32d-b869b3d080bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.897228 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7b1bacc-63b6-4446-a2b1-a1306e34f89c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84\" (UID: \"d7b1bacc-63b6-4446-a2b1-a1306e34f89c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.898943 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7b1bacc-63b6-4446-a2b1-a1306e34f89c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84\" (UID: \"d7b1bacc-63b6-4446-a2b1-a1306e34f89c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.899470 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e9ab512a-6196-4d62-a32d-b869b3d080bf-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv\" (UID: \"e9ab512a-6196-4d62-a32d-b869b3d080bf\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.904856 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-q88r4"]
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.907497 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.984301 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5ktx\" (UniqueName: \"kubernetes.io/projected/e398c065-2809-4a64-9ccc-801f6eb7d8b7-kube-api-access-p5ktx\") pod \"observability-operator-59bdc8b94-q88r4\" (UID: \"e398c065-2809-4a64-9ccc-801f6eb7d8b7\") " pod="openshift-operators/observability-operator-59bdc8b94-q88r4"
Feb 19 00:17:27 crc kubenswrapper[4889]: I0219 00:17:27.984382 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e398c065-2809-4a64-9ccc-801f6eb7d8b7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-q88r4\" (UID: \"e398c065-2809-4a64-9ccc-801f6eb7d8b7\") " pod="openshift-operators/observability-operator-59bdc8b94-q88r4"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.011502 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tr5l7"]
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.012533 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.016867 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-895qm"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.026580 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.030366 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tr5l7"]
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.049168 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.087465 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5ktx\" (UniqueName: \"kubernetes.io/projected/e398c065-2809-4a64-9ccc-801f6eb7d8b7-kube-api-access-p5ktx\") pod \"observability-operator-59bdc8b94-q88r4\" (UID: \"e398c065-2809-4a64-9ccc-801f6eb7d8b7\") " pod="openshift-operators/observability-operator-59bdc8b94-q88r4"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.087530 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0184625-e425-46d0-ab8a-13da9ced8a6f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tr5l7\" (UID: \"e0184625-e425-46d0-ab8a-13da9ced8a6f\") " pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.087606 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e398c065-2809-4a64-9ccc-801f6eb7d8b7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-q88r4\" (UID: \"e398c065-2809-4a64-9ccc-801f6eb7d8b7\") " pod="openshift-operators/observability-operator-59bdc8b94-q88r4"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.087653 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94p6v\" (UniqueName: \"kubernetes.io/projected/e0184625-e425-46d0-ab8a-13da9ced8a6f-kube-api-access-94p6v\") pod \"perses-operator-5bf474d74f-tr5l7\" (UID: \"e0184625-e425-46d0-ab8a-13da9ced8a6f\") " pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.109618 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e398c065-2809-4a64-9ccc-801f6eb7d8b7-observability-operator-tls\") pod \"observability-operator-59bdc8b94-q88r4\" (UID: \"e398c065-2809-4a64-9ccc-801f6eb7d8b7\") " pod="openshift-operators/observability-operator-59bdc8b94-q88r4"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.134490 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5ktx\" (UniqueName: \"kubernetes.io/projected/e398c065-2809-4a64-9ccc-801f6eb7d8b7-kube-api-access-p5ktx\") pod \"observability-operator-59bdc8b94-q88r4\" (UID: \"e398c065-2809-4a64-9ccc-801f6eb7d8b7\") " pod="openshift-operators/observability-operator-59bdc8b94-q88r4"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.178872 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-q88r4"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.188533 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0184625-e425-46d0-ab8a-13da9ced8a6f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tr5l7\" (UID: \"e0184625-e425-46d0-ab8a-13da9ced8a6f\") " pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.188653 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94p6v\" (UniqueName: \"kubernetes.io/projected/e0184625-e425-46d0-ab8a-13da9ced8a6f-kube-api-access-94p6v\") pod \"perses-operator-5bf474d74f-tr5l7\" (UID: \"e0184625-e425-46d0-ab8a-13da9ced8a6f\") " pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.190188 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0184625-e425-46d0-ab8a-13da9ced8a6f-openshift-service-ca\") pod \"perses-operator-5bf474d74f-tr5l7\" (UID: \"e0184625-e425-46d0-ab8a-13da9ced8a6f\") " pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.224566 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94p6v\" (UniqueName: \"kubernetes.io/projected/e0184625-e425-46d0-ab8a-13da9ced8a6f-kube-api-access-94p6v\") pod \"perses-operator-5bf474d74f-tr5l7\" (UID: \"e0184625-e425-46d0-ab8a-13da9ced8a6f\") " pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.455273 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:17:28 crc kubenswrapper[4889]: I0219 00:17:28.935182 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7"]
Feb 19 00:17:28 crc kubenswrapper[4889]: W0219 00:17:28.946513 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod211e50ac_fbca_44f5_8e00_d6462342ee96.slice/crio-6452d850982f49c549b9a3406bbe98263c187abcc4a5a232b94fd300c0634d9a WatchSource:0}: Error finding container 6452d850982f49c549b9a3406bbe98263c187abcc4a5a232b94fd300c0634d9a: Status 404 returned error can't find the container with id 6452d850982f49c549b9a3406bbe98263c187abcc4a5a232b94fd300c0634d9a
Feb 19 00:17:29 crc kubenswrapper[4889]: I0219 00:17:29.104824 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84"]
Feb 19 00:17:29 crc kubenswrapper[4889]: I0219 00:17:29.165448 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-q88r4"]
Feb 19 00:17:29 crc kubenswrapper[4889]: I0219 00:17:29.181552 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv"]
Feb 19 00:17:29 crc kubenswrapper[4889]: I0219 00:17:29.462061 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-tr5l7"]
Feb 19 00:17:29 crc kubenswrapper[4889]: W0219 00:17:29.478468 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0184625_e425_46d0_ab8a_13da9ced8a6f.slice/crio-12266848bbdddee9a65637f690118354e2ddfa3f2b6d918aa73040e6b03d71ad WatchSource:0}: Error finding container 12266848bbdddee9a65637f690118354e2ddfa3f2b6d918aa73040e6b03d71ad: Status 404 returned error can't find the container with id 12266848bbdddee9a65637f690118354e2ddfa3f2b6d918aa73040e6b03d71ad
Feb 19 00:17:29 crc kubenswrapper[4889]: I0219 00:17:29.549891 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tr5l7" event={"ID":"e0184625-e425-46d0-ab8a-13da9ced8a6f","Type":"ContainerStarted","Data":"12266848bbdddee9a65637f690118354e2ddfa3f2b6d918aa73040e6b03d71ad"}
Feb 19 00:17:29 crc kubenswrapper[4889]: I0219 00:17:29.552509 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84" event={"ID":"d7b1bacc-63b6-4446-a2b1-a1306e34f89c","Type":"ContainerStarted","Data":"c86d0086f3cea8db6d7abd02060c1cce3af4f287f4f118280dd9d15d298fbfc7"}
Feb 19 00:17:29 crc kubenswrapper[4889]: I0219 00:17:29.554157 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7" event={"ID":"211e50ac-fbca-44f5-8e00-d6462342ee96","Type":"ContainerStarted","Data":"6452d850982f49c549b9a3406bbe98263c187abcc4a5a232b94fd300c0634d9a"}
Feb 19 00:17:29 crc kubenswrapper[4889]: I0219 00:17:29.555785 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-q88r4" event={"ID":"e398c065-2809-4a64-9ccc-801f6eb7d8b7","Type":"ContainerStarted","Data":"77158a90ba9dbe337ba4358b08dd207ec638d79f0933f3810d5375cd682eeafc"}
Feb 19 00:17:29 crc kubenswrapper[4889]: I0219 00:17:29.563671 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv" event={"ID":"e9ab512a-6196-4d62-a32d-b869b3d080bf","Type":"ContainerStarted","Data":"d5f3a6ae90cdc238eb77aced0941c0ec1e3c55736aefa21d87a99410d8d10757"}
Feb 19 00:17:31 crc kubenswrapper[4889]: I0219 00:17:31.775644 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-xn49g"]
Feb 19 00:17:31 crc kubenswrapper[4889]: I0219 00:17:31.777331 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-xn49g"
Feb 19 00:17:31 crc kubenswrapper[4889]: I0219 00:17:31.843887 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5862\" (UniqueName: \"kubernetes.io/projected/8d857298-bbb6-403d-85b9-778150669d32-kube-api-access-z5862\") pod \"interconnect-operator-5bb49f789d-xn49g\" (UID: \"8d857298-bbb6-403d-85b9-778150669d32\") " pod="service-telemetry/interconnect-operator-5bb49f789d-xn49g"
Feb 19 00:17:31 crc kubenswrapper[4889]: I0219 00:17:31.873804 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt"
Feb 19 00:17:31 crc kubenswrapper[4889]: I0219 00:17:31.874041 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt"
Feb 19 00:17:31 crc kubenswrapper[4889]: I0219 00:17:31.874188 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-tqnf4"
Feb 19 00:17:31 crc kubenswrapper[4889]: I0219 00:17:31.898681 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-xn49g"]
Feb 19 00:17:31 crc kubenswrapper[4889]: I0219 00:17:31.945294 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5862\" (UniqueName: \"kubernetes.io/projected/8d857298-bbb6-403d-85b9-778150669d32-kube-api-access-z5862\") pod \"interconnect-operator-5bb49f789d-xn49g\" (UID: \"8d857298-bbb6-403d-85b9-778150669d32\") " pod="service-telemetry/interconnect-operator-5bb49f789d-xn49g"
Feb 19 00:17:31 crc kubenswrapper[4889]: I0219 00:17:31.990230 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5862\" (UniqueName: \"kubernetes.io/projected/8d857298-bbb6-403d-85b9-778150669d32-kube-api-access-z5862\") pod \"interconnect-operator-5bb49f789d-xn49g\" (UID: \"8d857298-bbb6-403d-85b9-778150669d32\") " pod="service-telemetry/interconnect-operator-5bb49f789d-xn49g"
Feb 19 00:17:32 crc kubenswrapper[4889]: I0219 00:17:32.202559 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-xn49g"
Feb 19 00:17:33 crc kubenswrapper[4889]: I0219 00:17:33.904843 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-55b76ddcbf-c8spm"]
Feb 19 00:17:33 crc kubenswrapper[4889]: I0219 00:17:33.905953 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:33 crc kubenswrapper[4889]: I0219 00:17:33.910761 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-nwt8t"
Feb 19 00:17:33 crc kubenswrapper[4889]: I0219 00:17:33.911055 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert"
Feb 19 00:17:33 crc kubenswrapper[4889]: I0219 00:17:33.926387 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-55b76ddcbf-c8spm"]
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.003700 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf5ed595-96e5-4013-bf1d-e1256b09c206-webhook-cert\") pod \"elastic-operator-55b76ddcbf-c8spm\" (UID: \"cf5ed595-96e5-4013-bf1d-e1256b09c206\") " pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.003806 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zpc\" (UniqueName: \"kubernetes.io/projected/cf5ed595-96e5-4013-bf1d-e1256b09c206-kube-api-access-c4zpc\") pod \"elastic-operator-55b76ddcbf-c8spm\" (UID: \"cf5ed595-96e5-4013-bf1d-e1256b09c206\") " pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.003900 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf5ed595-96e5-4013-bf1d-e1256b09c206-apiservice-cert\") pod \"elastic-operator-55b76ddcbf-c8spm\" (UID: \"cf5ed595-96e5-4013-bf1d-e1256b09c206\") " pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.105240 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4zpc\" (UniqueName: \"kubernetes.io/projected/cf5ed595-96e5-4013-bf1d-e1256b09c206-kube-api-access-c4zpc\") pod \"elastic-operator-55b76ddcbf-c8spm\" (UID: \"cf5ed595-96e5-4013-bf1d-e1256b09c206\") " pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.105341 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf5ed595-96e5-4013-bf1d-e1256b09c206-apiservice-cert\") pod \"elastic-operator-55b76ddcbf-c8spm\" (UID: \"cf5ed595-96e5-4013-bf1d-e1256b09c206\") " pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.105389 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf5ed595-96e5-4013-bf1d-e1256b09c206-webhook-cert\") pod \"elastic-operator-55b76ddcbf-c8spm\" (UID: \"cf5ed595-96e5-4013-bf1d-e1256b09c206\") " pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.114249 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf5ed595-96e5-4013-bf1d-e1256b09c206-webhook-cert\") pod \"elastic-operator-55b76ddcbf-c8spm\" (UID: \"cf5ed595-96e5-4013-bf1d-e1256b09c206\") " pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.128567 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf5ed595-96e5-4013-bf1d-e1256b09c206-apiservice-cert\") pod \"elastic-operator-55b76ddcbf-c8spm\" (UID: \"cf5ed595-96e5-4013-bf1d-e1256b09c206\") " pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.146026 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4zpc\" (UniqueName: \"kubernetes.io/projected/cf5ed595-96e5-4013-bf1d-e1256b09c206-kube-api-access-c4zpc\") pod \"elastic-operator-55b76ddcbf-c8spm\" (UID: \"cf5ed595-96e5-4013-bf1d-e1256b09c206\") " pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:34 crc kubenswrapper[4889]: I0219 00:17:34.295000 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm"
Feb 19 00:17:41 crc kubenswrapper[4889]: E0219 00:17:41.382760 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908"
Feb 19 00:17:41 crc kubenswrapper[4889]: E0219 00:17:41.383554 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:pull,Image:registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908,Command:[/util/cpb /bundle],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bundle,ReadOnly:false,MountPath:/bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:util,ReadOnly:false,MountPath:/util,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8948n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv_openshift-marketplace(2c641527-033b-418b-9aba-d399b711acaf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 00:17:41 crc kubenswrapper[4889]: E0219 00:17:41.384803 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" podUID="2c641527-033b-418b-9aba-d399b711acaf"
Feb 19 00:17:42 crc kubenswrapper[4889]: E0219 00:17:41.883198 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pull\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908\\\"\"" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" podUID="2c641527-033b-418b-9aba-d399b711acaf"
Feb 19 00:17:48 crc kubenswrapper[4889]: E0219 00:17:48.355377 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8"
Feb 19 00:17:48 crc kubenswrapper[4889]: E0219 00:17:48.356664 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94p6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5bf474d74f-tr5l7_openshift-operators(e0184625-e425-46d0-ab8a-13da9ced8a6f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Feb 19 00:17:48 crc kubenswrapper[4889]: E0219 00:17:48.358683 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5bf474d74f-tr5l7" podUID="e0184625-e425-46d0-ab8a-13da9ced8a6f"
Feb 19 00:17:48 crc kubenswrapper[4889]: E0219 00:17:48.937159 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8\\\"\"" pod="openshift-operators/perses-operator-5bf474d74f-tr5l7" podUID="e0184625-e425-46d0-ab8a-13da9ced8a6f"
Feb 19 00:17:49 crc kubenswrapper[4889]: E0219
00:17:49.966770 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Feb 19 00:17:49 crc kubenswrapper[4889]: E0219 00:17:49.967514 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7s4fn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-x78h7_openshift-operators(211e50ac-fbca-44f5-8e00-d6462342ee96): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:17:49 crc kubenswrapper[4889]: E0219 00:17:49.968921 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7" podUID="211e50ac-fbca-44f5-8e00-d6462342ee96" Feb 19 00:17:50 crc kubenswrapper[4889]: E0219 00:17:50.554662 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Feb 19 00:17:50 
crc kubenswrapper[4889]: E0219 00:17:50.555134 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv_openshift-operators(e9ab512a-6196-4d62-a32d-b869b3d080bf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:17:50 crc kubenswrapper[4889]: E0219 00:17:50.556341 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv" podUID="e9ab512a-6196-4d62-a32d-b869b3d080bf" Feb 19 00:17:50 crc kubenswrapper[4889]: E0219 00:17:50.589738 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Feb 19 00:17:50 crc kubenswrapper[4889]: E0219 00:17:50.590582 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84_openshift-operators(d7b1bacc-63b6-4446-a2b1-a1306e34f89c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 00:17:50 crc kubenswrapper[4889]: E0219 00:17:50.591885 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84" podUID="d7b1bacc-63b6-4446-a2b1-a1306e34f89c" Feb 19 00:17:50 crc kubenswrapper[4889]: I0219 00:17:50.957931 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/elastic-operator-55b76ddcbf-c8spm"] Feb 19 00:17:50 crc kubenswrapper[4889]: I0219 00:17:50.958484 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-q88r4" event={"ID":"e398c065-2809-4a64-9ccc-801f6eb7d8b7","Type":"ContainerStarted","Data":"77e4fba5da2cda49196686ced1b02c5c59f30e4683c8be4fb611a8e4b0da7257"} Feb 19 00:17:50 crc kubenswrapper[4889]: E0219 00:17:50.963650 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84" podUID="d7b1bacc-63b6-4446-a2b1-a1306e34f89c" Feb 19 00:17:50 crc kubenswrapper[4889]: E0219 00:17:50.963941 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7" podUID="211e50ac-fbca-44f5-8e00-d6462342ee96" Feb 19 00:17:50 crc kubenswrapper[4889]: E0219 00:17:50.964016 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv" 
podUID="e9ab512a-6196-4d62-a32d-b869b3d080bf" Feb 19 00:17:50 crc kubenswrapper[4889]: W0219 00:17:50.974283 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf5ed595_96e5_4013_bf1d_e1256b09c206.slice/crio-21517940276b139a99cdb514026d8ee910fa1adcc6603b8388c0bea0bf1b8080 WatchSource:0}: Error finding container 21517940276b139a99cdb514026d8ee910fa1adcc6603b8388c0bea0bf1b8080: Status 404 returned error can't find the container with id 21517940276b139a99cdb514026d8ee910fa1adcc6603b8388c0bea0bf1b8080 Feb 19 00:17:51 crc kubenswrapper[4889]: I0219 00:17:51.052865 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-q88r4" podStartSLOduration=2.66033382 podStartE2EDuration="24.052839818s" podCreationTimestamp="2026-02-19 00:17:27 +0000 UTC" firstStartedPulling="2026-02-19 00:17:29.222343636 +0000 UTC m=+675.187008627" lastFinishedPulling="2026-02-19 00:17:50.614849634 +0000 UTC m=+696.579514625" observedRunningTime="2026-02-19 00:17:51.050355168 +0000 UTC m=+697.015020159" watchObservedRunningTime="2026-02-19 00:17:51.052839818 +0000 UTC m=+697.017504809" Feb 19 00:17:51 crc kubenswrapper[4889]: I0219 00:17:51.069931 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-xn49g"] Feb 19 00:17:51 crc kubenswrapper[4889]: W0219 00:17:51.080199 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d857298_bbb6_403d_85b9_778150669d32.slice/crio-df2f97372ba9200bde0f6e37ec3f9f85a808ab2cf2fb82affe94da45d6cc2b40 WatchSource:0}: Error finding container df2f97372ba9200bde0f6e37ec3f9f85a808ab2cf2fb82affe94da45d6cc2b40: Status 404 returned error can't find the container with id df2f97372ba9200bde0f6e37ec3f9f85a808ab2cf2fb82affe94da45d6cc2b40 Feb 19 00:17:51 crc kubenswrapper[4889]: I0219 
00:17:51.965838 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm" event={"ID":"cf5ed595-96e5-4013-bf1d-e1256b09c206","Type":"ContainerStarted","Data":"21517940276b139a99cdb514026d8ee910fa1adcc6603b8388c0bea0bf1b8080"} Feb 19 00:17:51 crc kubenswrapper[4889]: I0219 00:17:51.970375 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-xn49g" event={"ID":"8d857298-bbb6-403d-85b9-778150669d32","Type":"ContainerStarted","Data":"df2f97372ba9200bde0f6e37ec3f9f85a808ab2cf2fb82affe94da45d6cc2b40"} Feb 19 00:17:51 crc kubenswrapper[4889]: I0219 00:17:51.970736 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-q88r4" Feb 19 00:17:51 crc kubenswrapper[4889]: I0219 00:17:51.988896 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-q88r4" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.021130 4889 generic.go:334] "Generic (PLEG): container finished" podID="2c641527-033b-418b-9aba-d399b711acaf" containerID="3cff3d87dea67d817f7688da0b50c5ebe5f2262d600d1468728bf9d05ee15e50" exitCode=0 Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.021327 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" event={"ID":"2c641527-033b-418b-9aba-d399b711acaf","Type":"ContainerDied","Data":"3cff3d87dea67d817f7688da0b50c5ebe5f2262d600d1468728bf9d05ee15e50"} Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.034079 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm" event={"ID":"cf5ed595-96e5-4013-bf1d-e1256b09c206","Type":"ContainerStarted","Data":"4af4ca1a940ef341a6563f79a9898d9198fd67da6c214b041ce7cf916505d507"} Feb 19 00:17:56 crc kubenswrapper[4889]: 
I0219 00:17:56.074969 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-55b76ddcbf-c8spm" podStartSLOduration=18.517963909 podStartE2EDuration="23.074944806s" podCreationTimestamp="2026-02-19 00:17:33 +0000 UTC" firstStartedPulling="2026-02-19 00:17:50.988417337 +0000 UTC m=+696.953082328" lastFinishedPulling="2026-02-19 00:17:55.545398234 +0000 UTC m=+701.510063225" observedRunningTime="2026-02-19 00:17:56.07350253 +0000 UTC m=+702.038167521" watchObservedRunningTime="2026-02-19 00:17:56.074944806 +0000 UTC m=+702.039609797" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.795930 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.797811 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.804690 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-5bl49" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.804895 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.804995 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.805045 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.806879 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.806940 4889 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"service-telemetry"/"elasticsearch-es-scripts" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.806871 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.808802 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.812803 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.823439 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.945077 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.945776 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.945818 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: 
\"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.945848 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.945868 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.945970 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.946001 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.946032 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.946069 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.946098 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.946120 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.946143 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" 
(UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.946175 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ec73ead9-66c6-4de8-8def-ab772839b617-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.946268 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:56 crc kubenswrapper[4889]: I0219 00:17:56.946293 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047336 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 
00:17:57.047393 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047422 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047446 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ec73ead9-66c6-4de8-8def-ab772839b617-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047474 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047506 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047536 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047567 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047599 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047623 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047643 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047708 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047728 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.047759 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.048391 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.049143 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.051894 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.052206 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.053637 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.055152 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.056126 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.056463 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ec73ead9-66c6-4de8-8def-ab772839b617-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.063006 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.063731 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.064602 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.067149 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.070524 4889 generic.go:334] "Generic (PLEG): container finished" podID="2c641527-033b-418b-9aba-d399b711acaf" containerID="146c96f58213901e9793348a768af9fac55bcff682b8416e86e601438356eb0f" exitCode=0
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.071389 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" event={"ID":"2c641527-033b-418b-9aba-d399b711acaf","Type":"ContainerDied","Data":"146c96f58213901e9793348a768af9fac55bcff682b8416e86e601438356eb0f"}
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.078027 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.079173 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ec73ead9-66c6-4de8-8def-ab772839b617-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.079726 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ec73ead9-66c6-4de8-8def-ab772839b617-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ec73ead9-66c6-4de8-8def-ab772839b617\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:17:57 crc kubenswrapper[4889]: I0219 00:17:57.124574 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.755353 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv"
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.884017 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8948n\" (UniqueName: \"kubernetes.io/projected/2c641527-033b-418b-9aba-d399b711acaf-kube-api-access-8948n\") pod \"2c641527-033b-418b-9aba-d399b711acaf\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") "
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.884087 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-util\") pod \"2c641527-033b-418b-9aba-d399b711acaf\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") "
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.884252 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-bundle\") pod \"2c641527-033b-418b-9aba-d399b711acaf\" (UID: \"2c641527-033b-418b-9aba-d399b711acaf\") "
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.885301 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-bundle" (OuterVolumeSpecName: "bundle") pod "2c641527-033b-418b-9aba-d399b711acaf" (UID: "2c641527-033b-418b-9aba-d399b711acaf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.891673 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c641527-033b-418b-9aba-d399b711acaf-kube-api-access-8948n" (OuterVolumeSpecName: "kube-api-access-8948n") pod "2c641527-033b-418b-9aba-d399b711acaf" (UID: "2c641527-033b-418b-9aba-d399b711acaf"). InnerVolumeSpecName "kube-api-access-8948n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.899983 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-util" (OuterVolumeSpecName: "util") pod "2c641527-033b-418b-9aba-d399b711acaf" (UID: "2c641527-033b-418b-9aba-d399b711acaf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.988251 4889 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.988317 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8948n\" (UniqueName: \"kubernetes.io/projected/2c641527-033b-418b-9aba-d399b711acaf-kube-api-access-8948n\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:01 crc kubenswrapper[4889]: I0219 00:18:01.988340 4889 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2c641527-033b-418b-9aba-d399b711acaf-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:02 crc kubenswrapper[4889]: I0219 00:18:02.165844 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv" event={"ID":"2c641527-033b-418b-9aba-d399b711acaf","Type":"ContainerDied","Data":"9b01bc5ef5192887ce2ff8e4013cca6c979c61f993c84694eb17a4373f6d0287"}
Feb 19 00:18:02 crc kubenswrapper[4889]: I0219 00:18:02.165903 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b01bc5ef5192887ce2ff8e4013cca6c979c61f993c84694eb17a4373f6d0287"
Feb 19 00:18:02 crc kubenswrapper[4889]: I0219 00:18:02.165955 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv"
Feb 19 00:18:02 crc kubenswrapper[4889]: I0219 00:18:02.375261 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 19 00:18:02 crc kubenswrapper[4889]: W0219 00:18:02.384127 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec73ead9_66c6_4de8_8def_ab772839b617.slice/crio-31d0577f35f88a0f6fd5670733940a514339956b6d8ac6127afcc65ba6ca5b49 WatchSource:0}: Error finding container 31d0577f35f88a0f6fd5670733940a514339956b6d8ac6127afcc65ba6ca5b49: Status 404 returned error can't find the container with id 31d0577f35f88a0f6fd5670733940a514339956b6d8ac6127afcc65ba6ca5b49
Feb 19 00:18:03 crc kubenswrapper[4889]: I0219 00:18:03.176240 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-xn49g" event={"ID":"8d857298-bbb6-403d-85b9-778150669d32","Type":"ContainerStarted","Data":"f9f99ccf54cb87c8fdc140a01bd4c5cc118a994b0c3dd49eb0c09be3038e0b37"}
Feb 19 00:18:03 crc kubenswrapper[4889]: I0219 00:18:03.181206 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-tr5l7" event={"ID":"e0184625-e425-46d0-ab8a-13da9ced8a6f","Type":"ContainerStarted","Data":"02518793667bf35a794ddb6d5812bfb55cf9057efc93513af5a67e94c0bb3d90"}
Feb 19 00:18:03 crc kubenswrapper[4889]: I0219 00:18:03.181642 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:18:03 crc kubenswrapper[4889]: I0219 00:18:03.182898 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ec73ead9-66c6-4de8-8def-ab772839b617","Type":"ContainerStarted","Data":"31d0577f35f88a0f6fd5670733940a514339956b6d8ac6127afcc65ba6ca5b49"}
Feb 19 00:18:03 crc kubenswrapper[4889]: I0219 00:18:03.257494 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-xn49g" podStartSLOduration=21.11772235 podStartE2EDuration="32.257469115s" podCreationTimestamp="2026-02-19 00:17:31 +0000 UTC" firstStartedPulling="2026-02-19 00:17:51.082703193 +0000 UTC m=+697.047368184" lastFinishedPulling="2026-02-19 00:18:02.222449958 +0000 UTC m=+708.187114949" observedRunningTime="2026-02-19 00:18:03.2198609 +0000 UTC m=+709.184525891" watchObservedRunningTime="2026-02-19 00:18:03.257469115 +0000 UTC m=+709.222134106"
Feb 19 00:18:03 crc kubenswrapper[4889]: I0219 00:18:03.745883 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-tr5l7" podStartSLOduration=3.624014343 podStartE2EDuration="36.745848117s" podCreationTimestamp="2026-02-19 00:17:27 +0000 UTC" firstStartedPulling="2026-02-19 00:17:29.482761446 +0000 UTC m=+675.447426437" lastFinishedPulling="2026-02-19 00:18:02.60459521 +0000 UTC m=+708.569260211" observedRunningTime="2026-02-19 00:18:03.258446536 +0000 UTC m=+709.223111527" watchObservedRunningTime="2026-02-19 00:18:03.745848117 +0000 UTC m=+709.710513108"
Feb 19 00:18:05 crc kubenswrapper[4889]: I0219 00:18:05.254155 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7" event={"ID":"211e50ac-fbca-44f5-8e00-d6462342ee96","Type":"ContainerStarted","Data":"74537d493393a3118afd42b8f42e0496ac0651f18baec4d570d8a9126f8c1827"}
Feb 19 00:18:05 crc kubenswrapper[4889]: I0219 00:18:05.397817 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x78h7" podStartSLOduration=2.82536398 podStartE2EDuration="38.397791666s" podCreationTimestamp="2026-02-19 00:17:27 +0000 UTC" firstStartedPulling="2026-02-19 00:17:28.950134555 +0000 UTC m=+674.914799546" lastFinishedPulling="2026-02-19 00:18:04.522562241 +0000 UTC m=+710.487227232" observedRunningTime="2026-02-19 00:18:05.39109491 +0000 UTC m=+711.355759911" watchObservedRunningTime="2026-02-19 00:18:05.397791666 +0000 UTC m=+711.362456657"
Feb 19 00:18:08 crc kubenswrapper[4889]: I0219 00:18:08.518201 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-tr5l7"
Feb 19 00:18:12 crc kubenswrapper[4889]: I0219 00:18:12.307328 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv" event={"ID":"e9ab512a-6196-4d62-a32d-b869b3d080bf","Type":"ContainerStarted","Data":"53a7bee1e49260d5b354187c3965a852e57fbd03084f66a2ea30a8b525ed9563"}
Feb 19 00:18:12 crc kubenswrapper[4889]: I0219 00:18:12.309361 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84" event={"ID":"d7b1bacc-63b6-4446-a2b1-a1306e34f89c","Type":"ContainerStarted","Data":"23aac9fe8087ee192c40d2782c9a3a015719e6c0c95688edf15d89161d68a2b9"}
Feb 19 00:18:12 crc kubenswrapper[4889]: I0219 00:18:12.353439 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv" podStartSLOduration=2.935621604 podStartE2EDuration="45.353418974s" podCreationTimestamp="2026-02-19 00:17:27 +0000 UTC" firstStartedPulling="2026-02-19 00:17:29.222711838 +0000 UTC m=+675.187376829" lastFinishedPulling="2026-02-19 00:18:11.640509208 +0000 UTC m=+717.605174199" observedRunningTime="2026-02-19 00:18:12.353106874 +0000 UTC m=+718.317771855" watchObservedRunningTime="2026-02-19 00:18:12.353418974 +0000 UTC m=+718.318083965"
Feb 19 00:18:12 crc kubenswrapper[4889]: I0219 00:18:12.353730 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84" podStartSLOduration=2.864522107 podStartE2EDuration="45.353721754s" podCreationTimestamp="2026-02-19 00:17:27 +0000 UTC" firstStartedPulling="2026-02-19 00:17:29.150432814 +0000 UTC m=+675.115097805" lastFinishedPulling="2026-02-19 00:18:11.639632461 +0000 UTC m=+717.604297452" observedRunningTime="2026-02-19 00:18:12.328848964 +0000 UTC m=+718.293513955" watchObservedRunningTime="2026-02-19 00:18:12.353721754 +0000 UTC m=+718.318386745"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.129210 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"]
Feb 19 00:18:13 crc kubenswrapper[4889]: E0219 00:18:13.129531 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c641527-033b-418b-9aba-d399b711acaf" containerName="extract"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.129549 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c641527-033b-418b-9aba-d399b711acaf" containerName="extract"
Feb 19 00:18:13 crc kubenswrapper[4889]: E0219 00:18:13.129557 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c641527-033b-418b-9aba-d399b711acaf" containerName="pull"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.129563 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c641527-033b-418b-9aba-d399b711acaf" containerName="pull"
Feb 19 00:18:13 crc kubenswrapper[4889]: E0219 00:18:13.129582 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c641527-033b-418b-9aba-d399b711acaf" containerName="util"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.129590 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c641527-033b-418b-9aba-d399b711acaf" containerName="util"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.129714 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c641527-033b-418b-9aba-d399b711acaf" containerName="extract"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.130320 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.133415 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.133449 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.133835 4889 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8wgcx"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.147951 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"]
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.221659 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/563de8f5-7eb1-443a-9c40-a25bf0d41e4d-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-g8chv\" (UID: \"563de8f5-7eb1-443a-9c40-a25bf0d41e4d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.221725 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9r72\" (UniqueName: \"kubernetes.io/projected/563de8f5-7eb1-443a-9c40-a25bf0d41e4d-kube-api-access-z9r72\") pod \"cert-manager-operator-controller-manager-5586865c96-g8chv\" (UID: \"563de8f5-7eb1-443a-9c40-a25bf0d41e4d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.323149 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/563de8f5-7eb1-443a-9c40-a25bf0d41e4d-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-g8chv\" (UID: \"563de8f5-7eb1-443a-9c40-a25bf0d41e4d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.323258 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9r72\" (UniqueName: \"kubernetes.io/projected/563de8f5-7eb1-443a-9c40-a25bf0d41e4d-kube-api-access-z9r72\") pod \"cert-manager-operator-controller-manager-5586865c96-g8chv\" (UID: \"563de8f5-7eb1-443a-9c40-a25bf0d41e4d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.323721 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/563de8f5-7eb1-443a-9c40-a25bf0d41e4d-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-g8chv\" (UID: \"563de8f5-7eb1-443a-9c40-a25bf0d41e4d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.579188 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9r72\" (UniqueName: \"kubernetes.io/projected/563de8f5-7eb1-443a-9c40-a25bf0d41e4d-kube-api-access-z9r72\") pod \"cert-manager-operator-controller-manager-5586865c96-g8chv\" (UID: \"563de8f5-7eb1-443a-9c40-a25bf0d41e4d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"
Feb 19 00:18:13 crc kubenswrapper[4889]: I0219 00:18:13.753373 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"
Feb 19 00:18:14 crc kubenswrapper[4889]: I0219 00:18:14.570124 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv"]
Feb 19 00:18:14 crc kubenswrapper[4889]: W0219 00:18:14.585405 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod563de8f5_7eb1_443a_9c40_a25bf0d41e4d.slice/crio-9545f1307258169eff8f7826af7f74b0b78adc78bd10ea3eaa29e23e51f3d218 WatchSource:0}: Error finding container 9545f1307258169eff8f7826af7f74b0b78adc78bd10ea3eaa29e23e51f3d218: Status 404 returned error can't find the container with id 9545f1307258169eff8f7826af7f74b0b78adc78bd10ea3eaa29e23e51f3d218
Feb 19 00:18:15 crc kubenswrapper[4889]: I0219 00:18:15.522640 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv" event={"ID":"563de8f5-7eb1-443a-9c40-a25bf0d41e4d","Type":"ContainerStarted","Data":"9545f1307258169eff8f7826af7f74b0b78adc78bd10ea3eaa29e23e51f3d218"}
Feb 19 00:18:23 crc kubenswrapper[4889]: I0219 00:18:23.925635 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 19 00:18:23 crc kubenswrapper[4889]: I0219 00:18:23.928206 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:23 crc kubenswrapper[4889]: I0219 00:18:23.931485 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Feb 19 00:18:23 crc kubenswrapper[4889]: I0219 00:18:23.932048 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config"
Feb 19 00:18:23 crc kubenswrapper[4889]: I0219 00:18:23.932441 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca"
Feb 19 00:18:23 crc kubenswrapper[4889]: I0219 00:18:23.932646 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-9cxxm"
Feb 19 00:18:23 crc kubenswrapper[4889]: I0219 00:18:23.932789 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.019008 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.019071 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.019095 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.019118 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.019338 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.019765 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.019845 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.019900 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.019969 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.020029 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.020059 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.020084 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxdp\" (UniqueName: \"kubernetes.io/projected/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-kube-api-access-wlxdp\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121398 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121460 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121484 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121506 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121532 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121553 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121575 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlxdp\" (UniqueName: \"kubernetes.io/projected/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-kube-api-access-wlxdp\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121603 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build"
Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121632 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName:
\"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121655 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121679 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121705 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.121943 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.122494 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.122549 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.122813 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.123082 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.123141 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc 
kubenswrapper[4889]: I0219 00:18:24.123193 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.123325 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.173828 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.176136 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-push\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.192800 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.196014 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlxdp\" (UniqueName: \"kubernetes.io/projected/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-kube-api-access-wlxdp\") pod \"service-telemetry-operator-1-build\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: I0219 00:18:24.257032 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:24 crc kubenswrapper[4889]: E0219 00:18:24.984905 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/elasticsearch:7.17.20" Feb 19 00:18:24 crc kubenswrapper[4889]: E0219 00:18:24.985659 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:elastic-internal-init-filesystem,Image:registry.connect.redhat.com/elastic/elasticsearch:7.17.20,Command:[bash -c 
/mnt/elastic-internal/scripts/prepare-fs.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:HEADLESS_SERVICE_NAME,Value:elasticsearch-es-default,ValueFrom:nil,},EnvVar{Name:PROBE_PASSWORD_PATH,Value:/mnt/elastic-internal/pod-mounted-users/elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:PROBE_USERNAME,Value:elastic-internal-probe,ValueFrom:nil,},EnvVar{Name:READINESS_PROBE_PROTOCOL,Value:https,ValueFrom:nil,},EnvVar{Name:NSS_SDB_USE_CACHE,Value:no,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:downward-api,ReadOnly:true,MountPath:/mnt/elastic-internal/downward-api,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-bin-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-bin-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config,ReadOnly:true,MountPath:/mnt/elastic-internal/elasticsearch-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-config-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-config-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-elasticsearch-plugins-local,ReadOnly:false,MountPath:/mnt/elastic-internal/elasticsearch-plugins-local,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-http-certificates,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/http-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-probe-user,ReadOnly:true,MountPath:/mnt/elastic-internal/pod-mounted-users,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-remote-certificate-authorities,ReadOnly:true,MountPath:/usr/share/elasticsearch/config/transport-remote-certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-scripts,ReadOnly:true,MountPath:/mnt/elastic-internal/scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-transport-certificates,ReadOnly:true,MountPath:/mnt/elastic-internal/transport-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-unicast-hosts,ReadOnly:
true,MountPath:/mnt/elastic-internal/unicast-hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elastic-internal-xpack-file-realm,ReadOnly:true,MountPath:/mnt/elastic-internal/xpack-file-realm,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-data,ReadOnly:false,MountPath:/usr/share/elasticsearch/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:elasticsearch-logs,ReadOnly:false,MountPath:/usr/share/elasticsearch/logs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elasticsearch-es-default-0_service-telemetry(ec73ead9-66c6-4de8-8def-ab772839b617): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 00:18:24 crc kubenswrapper[4889]: E0219 00:18:24.986910 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="ec73ead9-66c6-4de8-8def-ab772839b617" Feb 19 00:18:25 crc 
kubenswrapper[4889]: I0219 00:18:25.114484 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:18:25 crc kubenswrapper[4889]: W0219 00:18:25.121823 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bac70c3_e7ac_4cf4_a73f_1bc387fefd23.slice/crio-bf262f9d9ffb6503641725c6fa8fe684ddb9f5d74e8effa5cdbb87db08bb22a6 WatchSource:0}: Error finding container bf262f9d9ffb6503641725c6fa8fe684ddb9f5d74e8effa5cdbb87db08bb22a6: Status 404 returned error can't find the container with id bf262f9d9ffb6503641725c6fa8fe684ddb9f5d74e8effa5cdbb87db08bb22a6 Feb 19 00:18:25 crc kubenswrapper[4889]: I0219 00:18:25.986103 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23","Type":"ContainerStarted","Data":"bf262f9d9ffb6503641725c6fa8fe684ddb9f5d74e8effa5cdbb87db08bb22a6"} Feb 19 00:18:25 crc kubenswrapper[4889]: E0219 00:18:25.988598 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="ec73ead9-66c6-4de8-8def-ab772839b617" Feb 19 00:18:26 crc kubenswrapper[4889]: I0219 00:18:26.263882 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:18:26 crc kubenswrapper[4889]: I0219 00:18:26.299666 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:18:26 crc kubenswrapper[4889]: I0219 00:18:26.996727 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv" 
event={"ID":"563de8f5-7eb1-443a-9c40-a25bf0d41e4d","Type":"ContainerStarted","Data":"eac44fa368b963aeee079951a58bb207b769be81642ba44f374d510378c49563"} Feb 19 00:18:27 crc kubenswrapper[4889]: E0219 00:18:27.000634 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="ec73ead9-66c6-4de8-8def-ab772839b617" Feb 19 00:18:27 crc kubenswrapper[4889]: I0219 00:18:27.029396 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-g8chv" podStartSLOduration=2.3074183059999998 podStartE2EDuration="14.029378223s" podCreationTimestamp="2026-02-19 00:18:13 +0000 UTC" firstStartedPulling="2026-02-19 00:18:14.60460714 +0000 UTC m=+720.569272131" lastFinishedPulling="2026-02-19 00:18:26.326567057 +0000 UTC m=+732.291232048" observedRunningTime="2026-02-19 00:18:27.026571421 +0000 UTC m=+732.991236422" watchObservedRunningTime="2026-02-19 00:18:27.029378223 +0000 UTC m=+732.994043214" Feb 19 00:18:28 crc kubenswrapper[4889]: E0219 00:18:28.006039 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"elastic-internal-init-filesystem\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/elasticsearch:7.17.20\\\"\"" pod="service-telemetry/elasticsearch-es-default-0" podUID="ec73ead9-66c6-4de8-8def-ab772839b617" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.555089 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-jkdvp"] Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.556413 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.558880 4889 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5fjh7" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.559166 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.560468 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.568684 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-jkdvp"] Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.681037 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbwc8\" (UniqueName: \"kubernetes.io/projected/0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af-kube-api-access-lbwc8\") pod \"cert-manager-webhook-6888856db4-jkdvp\" (UID: \"0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af\") " pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.681582 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-jkdvp\" (UID: \"0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af\") " pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.783390 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbwc8\" (UniqueName: \"kubernetes.io/projected/0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af-kube-api-access-lbwc8\") pod \"cert-manager-webhook-6888856db4-jkdvp\" (UID: 
\"0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af\") " pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.783468 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-jkdvp\" (UID: \"0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af\") " pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.802585 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-jkdvp\" (UID: \"0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af\") " pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.803121 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbwc8\" (UniqueName: \"kubernetes.io/projected/0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af-kube-api-access-lbwc8\") pod \"cert-manager-webhook-6888856db4-jkdvp\" (UID: \"0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af\") " pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:33 crc kubenswrapper[4889]: I0219 00:18:33.883820 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.000875 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-wmqht"] Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.002211 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.008045 4889 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-bdtrz" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.024249 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-wmqht"] Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.170811 4889 generic.go:334] "Generic (PLEG): container finished" podID="3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" containerID="79a5d36939c926917ae99188ccde99354d4710f849af3d2271836a1c0bf3c3a3" exitCode=0 Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.170943 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23","Type":"ContainerDied","Data":"79a5d36939c926917ae99188ccde99354d4710f849af3d2271836a1c0bf3c3a3"} Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.189177 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/60be1caf-5351-4401-b59b-7213eef1a9b0-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-wmqht\" (UID: \"60be1caf-5351-4401-b59b-7213eef1a9b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.189624 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4lvf\" (UniqueName: \"kubernetes.io/projected/60be1caf-5351-4401-b59b-7213eef1a9b0-kube-api-access-p4lvf\") pod \"cert-manager-cainjector-5545bd876-wmqht\" (UID: \"60be1caf-5351-4401-b59b-7213eef1a9b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.204410 4889 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.291588 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4lvf\" (UniqueName: \"kubernetes.io/projected/60be1caf-5351-4401-b59b-7213eef1a9b0-kube-api-access-p4lvf\") pod \"cert-manager-cainjector-5545bd876-wmqht\" (UID: \"60be1caf-5351-4401-b59b-7213eef1a9b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.291756 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/60be1caf-5351-4401-b59b-7213eef1a9b0-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-wmqht\" (UID: \"60be1caf-5351-4401-b59b-7213eef1a9b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.313294 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4lvf\" (UniqueName: \"kubernetes.io/projected/60be1caf-5351-4401-b59b-7213eef1a9b0-kube-api-access-p4lvf\") pod \"cert-manager-cainjector-5545bd876-wmqht\" (UID: \"60be1caf-5351-4401-b59b-7213eef1a9b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.313982 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/60be1caf-5351-4401-b59b-7213eef1a9b0-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-wmqht\" (UID: \"60be1caf-5351-4401-b59b-7213eef1a9b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.359361 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.390585 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-jkdvp"] Feb 19 00:18:34 crc kubenswrapper[4889]: W0219 00:18:34.394581 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bb9cfc5_52ef_45c0_b3d9_0bbc4885d4af.slice/crio-5c2c4ceb47d996172c7ad9e904931a2870f0e809f7d6117bf8bfa4ce4d541f59 WatchSource:0}: Error finding container 5c2c4ceb47d996172c7ad9e904931a2870f0e809f7d6117bf8bfa4ce4d541f59: Status 404 returned error can't find the container with id 5c2c4ceb47d996172c7ad9e904931a2870f0e809f7d6117bf8bfa4ce4d541f59 Feb 19 00:18:34 crc kubenswrapper[4889]: I0219 00:18:34.599025 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-wmqht"] Feb 19 00:18:34 crc kubenswrapper[4889]: W0219 00:18:34.609316 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60be1caf_5351_4401_b59b_7213eef1a9b0.slice/crio-61dd456871675da4905abc06eb512e7fccf8b61c65d23da3419bbbaf0461203b WatchSource:0}: Error finding container 61dd456871675da4905abc06eb512e7fccf8b61c65d23da3419bbbaf0461203b: Status 404 returned error can't find the container with id 61dd456871675da4905abc06eb512e7fccf8b61c65d23da3419bbbaf0461203b Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.181900 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" event={"ID":"0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af","Type":"ContainerStarted","Data":"5c2c4ceb47d996172c7ad9e904931a2870f0e809f7d6117bf8bfa4ce4d541f59"} Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.185726 4889 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="service-telemetry/service-telemetry-operator-1-build" podUID="3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" containerName="docker-build" containerID="cri-o://f7a1fe57fa94c1220a2719d6c0343e21bbf9bbcc85bfb2e5c432bdd3c88c7987" gracePeriod=30 Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.186264 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23","Type":"ContainerStarted","Data":"f7a1fe57fa94c1220a2719d6c0343e21bbf9bbcc85bfb2e5c432bdd3c88c7987"} Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.191507 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" event={"ID":"60be1caf-5351-4401-b59b-7213eef1a9b0","Type":"ContainerStarted","Data":"61dd456871675da4905abc06eb512e7fccf8b61c65d23da3419bbbaf0461203b"} Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.219430 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=3.757333268 podStartE2EDuration="12.219406007s" podCreationTimestamp="2026-02-19 00:18:23 +0000 UTC" firstStartedPulling="2026-02-19 00:18:25.124763978 +0000 UTC m=+731.089428969" lastFinishedPulling="2026-02-19 00:18:33.586836717 +0000 UTC m=+739.551501708" observedRunningTime="2026-02-19 00:18:35.219057455 +0000 UTC m=+741.183722456" watchObservedRunningTime="2026-02-19 00:18:35.219406007 +0000 UTC m=+741.184070998" Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.914542 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.915751 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.918075 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.918503 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.920511 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Feb 19 00:18:35 crc kubenswrapper[4889]: I0219 00:18:35.945956 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.021982 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022060 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022083 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022113 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022145 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022185 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022248 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwwp\" (UniqueName: \"kubernetes.io/projected/b198e444-f787-4a92-b0eb-baa40ecfc16c-kube-api-access-snwwp\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022296 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022325 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022443 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022471 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.022503 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.123994 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124052 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124084 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124113 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124140 4889 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124160 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124182 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124212 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124250 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124389 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124896 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.124991 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.125092 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.125136 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: 
\"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.125429 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.125597 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.125655 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwwp\" (UniqueName: \"kubernetes.io/projected/b198e444-f787-4a92-b0eb-baa40ecfc16c-kube-api-access-snwwp\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.125687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.125737 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.126186 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.126689 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.131824 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-push\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.131814 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.149497 4889 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-snwwp\" (UniqueName: \"kubernetes.io/projected/b198e444-f787-4a92-b0eb-baa40ecfc16c-kube-api-access-snwwp\") pod \"service-telemetry-operator-2-build\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:36 crc kubenswrapper[4889]: I0219 00:18:36.233039 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:39 crc kubenswrapper[4889]: I0219 00:18:39.710883 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 19 00:18:40 crc kubenswrapper[4889]: I0219 00:18:40.950953 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_3bac70c3-e7ac-4cf4-a73f-1bc387fefd23/docker-build/0.log" Feb 19 00:18:40 crc kubenswrapper[4889]: I0219 00:18:40.952819 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.048021 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_3bac70c3-e7ac-4cf4-a73f-1bc387fefd23/docker-build/0.log" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.048710 4889 generic.go:334] "Generic (PLEG): container finished" podID="3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" containerID="f7a1fe57fa94c1220a2719d6c0343e21bbf9bbcc85bfb2e5c432bdd3c88c7987" exitCode=1 Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.048786 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23","Type":"ContainerDied","Data":"f7a1fe57fa94c1220a2719d6c0343e21bbf9bbcc85bfb2e5c432bdd3c88c7987"} Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.048910 4889 scope.go:117] "RemoveContainer" containerID="f7a1fe57fa94c1220a2719d6c0343e21bbf9bbcc85bfb2e5c432bdd3c88c7987" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.070054 4889 scope.go:117] "RemoveContainer" containerID="79a5d36939c926917ae99188ccde99354d4710f849af3d2271836a1c0bf3c3a3" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.104459 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-root\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.104586 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-pull\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: 
\"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.104654 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-system-configs\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.104795 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildcachedir\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.104904 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlxdp\" (UniqueName: \"kubernetes.io/projected/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-kube-api-access-wlxdp\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.104990 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-proxy-ca-bundles\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.105060 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-ca-bundles\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.105178 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-run\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.105259 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-node-pullsecrets\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.105379 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-blob-cache\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.105414 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildworkdir\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.105567 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-push\") pod \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\" (UID: \"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23\") " Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.104985 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: 
"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.105684 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.106122 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.106383 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.106576 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.106657 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.106689 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.106704 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.106715 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.106726 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.107099 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.107456 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.108025 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.108410 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.113085 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-kube-api-access-wlxdp" (OuterVolumeSpecName: "kube-api-access-wlxdp") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "kube-api-access-wlxdp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.113403 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.116391 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" (UID: "3bac70c3-e7ac-4cf4-a73f-1bc387fefd23"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.208955 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.211259 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.211281 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.211320 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: 
\"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.211338 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.211353 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:41 crc kubenswrapper[4889]: I0219 00:18:41.211367 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlxdp\" (UniqueName: \"kubernetes.io/projected/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23-kube-api-access-wlxdp\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:42 crc kubenswrapper[4889]: I0219 00:18:42.058010 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b198e444-f787-4a92-b0eb-baa40ecfc16c","Type":"ContainerStarted","Data":"8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651"} Feb 19 00:18:42 crc kubenswrapper[4889]: I0219 00:18:42.058810 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b198e444-f787-4a92-b0eb-baa40ecfc16c","Type":"ContainerStarted","Data":"830be3dc84e64e21da21c66d0ead6a6a059ea081af9b9842684622df3f286371"} Feb 19 00:18:42 crc kubenswrapper[4889]: I0219 00:18:42.059837 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 19 00:18:42 crc kubenswrapper[4889]: I0219 00:18:42.059878 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"3bac70c3-e7ac-4cf4-a73f-1bc387fefd23","Type":"ContainerDied","Data":"bf262f9d9ffb6503641725c6fa8fe684ddb9f5d74e8effa5cdbb87db08bb22a6"} Feb 19 00:18:42 crc kubenswrapper[4889]: I0219 00:18:42.062799 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ec73ead9-66c6-4de8-8def-ab772839b617","Type":"ContainerStarted","Data":"f6533e077a11e38d401baa91bc2d2492d4f40987b5726a96e964b82b352239d5"} Feb 19 00:18:42 crc kubenswrapper[4889]: E0219 00:18:42.154548 4889 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=4714509761814278372, SKID=, AKID=BB:37:CA:44:E4:AB:98:E3:97:BA:55:4F:36:BA:47:8E:DB:6F:1E:BA failed: x509: certificate signed by unknown authority" Feb 19 00:18:42 crc kubenswrapper[4889]: I0219 00:18:42.177353 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:18:42 crc kubenswrapper[4889]: I0219 00:18:42.185544 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 19 00:18:42 crc kubenswrapper[4889]: I0219 00:18:42.736285 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" path="/var/lib/kubelet/pods/3bac70c3-e7ac-4cf4-a73f-1bc387fefd23/volumes" Feb 19 00:18:43 crc kubenswrapper[4889]: I0219 00:18:43.194530 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.075081 4889 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="service-telemetry/service-telemetry-operator-2-build" podUID="b198e444-f787-4a92-b0eb-baa40ecfc16c" containerName="git-clone" containerID="cri-o://8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651" gracePeriod=30 Feb 19 00:18:44 crc kubenswrapper[4889]: E0219 00:18:44.334295 4889 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb198e444_f787_4a92_b0eb_baa40ecfc16c.slice/crio-8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb198e444_f787_4a92_b0eb_baa40ecfc16c.slice/crio-conmon-8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651.scope\": RecentStats: unable to find data in memory cache]" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.817256 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_b198e444-f787-4a92-b0eb-baa40ecfc16c/git-clone/0.log" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.818177 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.951264 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-blob-cache\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.951372 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-ca-bundles\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.951447 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-push\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.953119 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-pull\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.953175 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snwwp\" (UniqueName: \"kubernetes.io/projected/b198e444-f787-4a92-b0eb-baa40ecfc16c-kube-api-access-snwwp\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.953209 4889 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildworkdir\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.953251 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildcachedir\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.953286 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-run\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.953318 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-system-configs\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.953348 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-node-pullsecrets\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.953381 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-proxy-ca-bundles\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.953404 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-root\") pod \"b198e444-f787-4a92-b0eb-baa40ecfc16c\" (UID: \"b198e444-f787-4a92-b0eb-baa40ecfc16c\") " Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.954014 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.957357 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.957714 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.960283 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.960413 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.960425 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.960674 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.961015 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.961109 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.961196 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.961454 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:18:44 crc kubenswrapper[4889]: I0219 00:18:44.965452 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b198e444-f787-4a92-b0eb-baa40ecfc16c-kube-api-access-snwwp" (OuterVolumeSpecName: "kube-api-access-snwwp") pod "b198e444-f787-4a92-b0eb-baa40ecfc16c" (UID: "b198e444-f787-4a92-b0eb-baa40ecfc16c"). InnerVolumeSpecName "kube-api-access-snwwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054682 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054734 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054749 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054765 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/b198e444-f787-4a92-b0eb-baa40ecfc16c-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054776 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snwwp\" (UniqueName: \"kubernetes.io/projected/b198e444-f787-4a92-b0eb-baa40ecfc16c-kube-api-access-snwwp\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054790 4889 
reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054803 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054822 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054836 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054848 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b198e444-f787-4a92-b0eb-baa40ecfc16c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054861 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b198e444-f787-4a92-b0eb-baa40ecfc16c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.054876 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b198e444-f787-4a92-b0eb-baa40ecfc16c-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.083634 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" event={"ID":"60be1caf-5351-4401-b59b-7213eef1a9b0","Type":"ContainerStarted","Data":"5e5e5d26524fe57e6f43a81490329fa6209228c0b4af694ceae67b0e26aac234"} Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.085447 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_b198e444-f787-4a92-b0eb-baa40ecfc16c/git-clone/0.log" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.085501 4889 generic.go:334] "Generic (PLEG): container finished" podID="b198e444-f787-4a92-b0eb-baa40ecfc16c" containerID="8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651" exitCode=1 Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.085572 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b198e444-f787-4a92-b0eb-baa40ecfc16c","Type":"ContainerDied","Data":"8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651"} Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.085636 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"b198e444-f787-4a92-b0eb-baa40ecfc16c","Type":"ContainerDied","Data":"830be3dc84e64e21da21c66d0ead6a6a059ea081af9b9842684622df3f286371"} Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.085695 4889 scope.go:117] "RemoveContainer" containerID="8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.085919 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.122766 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" event={"ID":"0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af","Type":"ContainerStarted","Data":"cdcec07500621e07ecd99e22f8f26df1742f87a6050ea901f9a852a58a31a82f"} Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.124520 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.128368 4889 scope.go:117] "RemoveContainer" containerID="8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651" Feb 19 00:18:45 crc kubenswrapper[4889]: E0219 00:18:45.129911 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651\": container with ID starting with 8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651 not found: ID does not exist" containerID="8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.129960 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651"} err="failed to get container status \"8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651\": rpc error: code = NotFound desc = could not find container \"8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651\": container with ID starting with 8157e0268db3d685a4d7fbe4ec921444e2693712a4274c7acff809903869e651 not found: ID does not exist" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.148236 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-cainjector-5545bd876-wmqht" podStartSLOduration=2.230557149 podStartE2EDuration="12.148199066s" podCreationTimestamp="2026-02-19 00:18:33 +0000 UTC" firstStartedPulling="2026-02-19 00:18:34.612501462 +0000 UTC m=+740.577166453" lastFinishedPulling="2026-02-19 00:18:44.530143379 +0000 UTC m=+750.494808370" observedRunningTime="2026-02-19 00:18:45.121780207 +0000 UTC m=+751.086445218" watchObservedRunningTime="2026-02-19 00:18:45.148199066 +0000 UTC m=+751.112864057" Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.155817 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.162726 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 19 00:18:45 crc kubenswrapper[4889]: I0219 00:18:45.339720 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" podStartSLOduration=2.250997294 podStartE2EDuration="12.339691349s" podCreationTimestamp="2026-02-19 00:18:33 +0000 UTC" firstStartedPulling="2026-02-19 00:18:34.397726071 +0000 UTC m=+740.362391062" lastFinishedPulling="2026-02-19 00:18:44.486420126 +0000 UTC m=+750.451085117" observedRunningTime="2026-02-19 00:18:45.337375484 +0000 UTC m=+751.302040485" watchObservedRunningTime="2026-02-19 00:18:45.339691349 +0000 UTC m=+751.304356360" Feb 19 00:18:46 crc kubenswrapper[4889]: I0219 00:18:46.136058 4889 generic.go:334] "Generic (PLEG): container finished" podID="ec73ead9-66c6-4de8-8def-ab772839b617" containerID="f6533e077a11e38d401baa91bc2d2492d4f40987b5726a96e964b82b352239d5" exitCode=0 Feb 19 00:18:46 crc kubenswrapper[4889]: I0219 00:18:46.136135 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"ec73ead9-66c6-4de8-8def-ab772839b617","Type":"ContainerDied","Data":"f6533e077a11e38d401baa91bc2d2492d4f40987b5726a96e964b82b352239d5"} Feb 19 00:18:46 crc kubenswrapper[4889]: I0219 00:18:46.733466 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b198e444-f787-4a92-b0eb-baa40ecfc16c" path="/var/lib/kubelet/pods/b198e444-f787-4a92-b0eb-baa40ecfc16c/volumes" Feb 19 00:18:47 crc kubenswrapper[4889]: I0219 00:18:47.154003 4889 generic.go:334] "Generic (PLEG): container finished" podID="ec73ead9-66c6-4de8-8def-ab772839b617" containerID="3daa4923edbe84977d4d5adf8f7fbf2d5823fb8ed68794235e64585c6ccde464" exitCode=0 Feb 19 00:18:47 crc kubenswrapper[4889]: I0219 00:18:47.154056 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ec73ead9-66c6-4de8-8def-ab772839b617","Type":"ContainerDied","Data":"3daa4923edbe84977d4d5adf8f7fbf2d5823fb8ed68794235e64585c6ccde464"} Feb 19 00:18:48 crc kubenswrapper[4889]: I0219 00:18:48.166376 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ec73ead9-66c6-4de8-8def-ab772839b617","Type":"ContainerStarted","Data":"8a376fd6c4c7e29d2f65639644cff684e4d0b36d695923ebaccd19d70e004362"} Feb 19 00:18:48 crc kubenswrapper[4889]: I0219 00:18:48.166715 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:18:48 crc kubenswrapper[4889]: I0219 00:18:48.205443 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=13.819185129 podStartE2EDuration="52.205421928s" podCreationTimestamp="2026-02-19 00:17:56 +0000 UTC" firstStartedPulling="2026-02-19 00:18:02.387304553 +0000 UTC m=+708.351969584" lastFinishedPulling="2026-02-19 00:18:40.773541392 +0000 UTC m=+746.738206383" observedRunningTime="2026-02-19 00:18:48.201521921 
+0000 UTC m=+754.166186912" watchObservedRunningTime="2026-02-19 00:18:48.205421928 +0000 UTC m=+754.170086939" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.365552 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-t42lr"] Feb 19 00:18:50 crc kubenswrapper[4889]: E0219 00:18:50.366352 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b198e444-f787-4a92-b0eb-baa40ecfc16c" containerName="git-clone" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.366368 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b198e444-f787-4a92-b0eb-baa40ecfc16c" containerName="git-clone" Feb 19 00:18:50 crc kubenswrapper[4889]: E0219 00:18:50.366383 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" containerName="docker-build" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.366390 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" containerName="docker-build" Feb 19 00:18:50 crc kubenswrapper[4889]: E0219 00:18:50.366405 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" containerName="manage-dockerfile" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.366414 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" containerName="manage-dockerfile" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.366543 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="b198e444-f787-4a92-b0eb-baa40ecfc16c" containerName="git-clone" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.366560 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bac70c3-e7ac-4cf4-a73f-1bc387fefd23" containerName="docker-build" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.367097 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-t42lr" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.369725 4889 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-ldwm6" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.444115 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-t42lr"] Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.495880 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7t88\" (UniqueName: \"kubernetes.io/projected/8d3484ae-9bcc-4f3a-8b10-308202ec491e-kube-api-access-k7t88\") pod \"cert-manager-545d4d4674-t42lr\" (UID: \"8d3484ae-9bcc-4f3a-8b10-308202ec491e\") " pod="cert-manager/cert-manager-545d4d4674-t42lr" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.495936 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d3484ae-9bcc-4f3a-8b10-308202ec491e-bound-sa-token\") pod \"cert-manager-545d4d4674-t42lr\" (UID: \"8d3484ae-9bcc-4f3a-8b10-308202ec491e\") " pod="cert-manager/cert-manager-545d4d4674-t42lr" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.597382 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7t88\" (UniqueName: \"kubernetes.io/projected/8d3484ae-9bcc-4f3a-8b10-308202ec491e-kube-api-access-k7t88\") pod \"cert-manager-545d4d4674-t42lr\" (UID: \"8d3484ae-9bcc-4f3a-8b10-308202ec491e\") " pod="cert-manager/cert-manager-545d4d4674-t42lr" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.597474 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d3484ae-9bcc-4f3a-8b10-308202ec491e-bound-sa-token\") pod \"cert-manager-545d4d4674-t42lr\" (UID: 
\"8d3484ae-9bcc-4f3a-8b10-308202ec491e\") " pod="cert-manager/cert-manager-545d4d4674-t42lr" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.623055 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7t88\" (UniqueName: \"kubernetes.io/projected/8d3484ae-9bcc-4f3a-8b10-308202ec491e-kube-api-access-k7t88\") pod \"cert-manager-545d4d4674-t42lr\" (UID: \"8d3484ae-9bcc-4f3a-8b10-308202ec491e\") " pod="cert-manager/cert-manager-545d4d4674-t42lr" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.623490 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d3484ae-9bcc-4f3a-8b10-308202ec491e-bound-sa-token\") pod \"cert-manager-545d4d4674-t42lr\" (UID: \"8d3484ae-9bcc-4f3a-8b10-308202ec491e\") " pod="cert-manager/cert-manager-545d4d4674-t42lr" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.755440 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-t42lr" Feb 19 00:18:50 crc kubenswrapper[4889]: I0219 00:18:50.988237 4889 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 00:18:51 crc kubenswrapper[4889]: I0219 00:18:51.407183 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-t42lr"] Feb 19 00:18:52 crc kubenswrapper[4889]: I0219 00:18:52.196849 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-t42lr" event={"ID":"8d3484ae-9bcc-4f3a-8b10-308202ec491e","Type":"ContainerStarted","Data":"1710414f0992ad4d278255194fa040e65669438ff85bb328889325944d815ce6"} Feb 19 00:18:52 crc kubenswrapper[4889]: I0219 00:18:52.197510 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-t42lr" 
event={"ID":"8d3484ae-9bcc-4f3a-8b10-308202ec491e","Type":"ContainerStarted","Data":"54986648fa721232429f5986f95bf95da1bb49e02683a3b08987eef5444da42a"} Feb 19 00:18:52 crc kubenswrapper[4889]: I0219 00:18:52.228657 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-t42lr" podStartSLOduration=2.228626712 podStartE2EDuration="2.228626712s" podCreationTimestamp="2026-02-19 00:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:18:52.222198564 +0000 UTC m=+758.186863565" watchObservedRunningTime="2026-02-19 00:18:52.228626712 +0000 UTC m=+758.193291703" Feb 19 00:18:53 crc kubenswrapper[4889]: I0219 00:18:53.889059 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-jkdvp" Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.626985 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.628079 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.630806 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-sys-config"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.631159 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-9cxxm"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.632733 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-global-ca"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.636500 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-ca"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.668983 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.744525 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.744642 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-push\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.744976 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.745170 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.745402 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.745533 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.745578 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.745674 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.745772 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.745900 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.745981 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.746057 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9k5c\" (UniqueName: \"kubernetes.io/projected/a5873c75-8d24-42c1-9562-748b778f81a9-kube-api-access-q9k5c\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847155 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847254 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847295 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847323 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847353 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847386 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847438 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847473 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847495 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9k5c\" (UniqueName: \"kubernetes.io/projected/a5873c75-8d24-42c1-9562-748b778f81a9-kube-api-access-q9k5c\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847530 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847559 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-push\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847608 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.847626 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.848000 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.848191 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.848634 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.848743 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.848871 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.848939 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.849487 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.852521 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.855707 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.855875 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-push\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.874880 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9k5c\" (UniqueName: \"kubernetes.io/projected/a5873c75-8d24-42c1-9562-748b778f81a9-kube-api-access-q9k5c\") pod \"service-telemetry-operator-3-build\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") " pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:54 crc kubenswrapper[4889]: I0219 00:18:54.950903 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:18:55 crc kubenswrapper[4889]: I0219 00:18:55.973494 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"]
Feb 19 00:18:56 crc kubenswrapper[4889]: I0219 00:18:56.491364 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a5873c75-8d24-42c1-9562-748b778f81a9","Type":"ContainerStarted","Data":"c8239523dcfa355fc34c700f46b5f93fd55bf1ebbfbe546d5dbdcbf89bdb8bfb"}
Feb 19 00:18:56 crc kubenswrapper[4889]: I0219 00:18:56.491879 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a5873c75-8d24-42c1-9562-748b778f81a9","Type":"ContainerStarted","Data":"baf2da18977ddc6d277c607247d61f653ae76e180875cfad345c85d2646ec96d"}
Feb 19 00:18:57 crc kubenswrapper[4889]: I0219 00:18:57.239381 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ec73ead9-66c6-4de8-8def-ab772839b617" containerName="elasticsearch" probeResult="failure" output=<
Feb 19 00:18:57 crc kubenswrapper[4889]: {"timestamp": "2026-02-19T00:18:57+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 19 00:18:57 crc kubenswrapper[4889]: >
Feb 19 00:19:02 crc kubenswrapper[4889]: I0219 00:19:02.312986 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ec73ead9-66c6-4de8-8def-ab772839b617" containerName="elasticsearch" probeResult="failure" output=<
Feb 19 00:19:02 crc kubenswrapper[4889]: {"timestamp": "2026-02-19T00:19:02+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 19 00:19:02 crc kubenswrapper[4889]: >
Feb 19 00:19:05 crc kubenswrapper[4889]: I0219 00:19:05.556532 4889 generic.go:334] "Generic (PLEG): container finished" podID="a5873c75-8d24-42c1-9562-748b778f81a9" containerID="c8239523dcfa355fc34c700f46b5f93fd55bf1ebbfbe546d5dbdcbf89bdb8bfb" exitCode=0
Feb 19 00:19:05 crc kubenswrapper[4889]: I0219 00:19:05.556666 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a5873c75-8d24-42c1-9562-748b778f81a9","Type":"ContainerDied","Data":"c8239523dcfa355fc34c700f46b5f93fd55bf1ebbfbe546d5dbdcbf89bdb8bfb"}
Feb 19 00:19:06 crc kubenswrapper[4889]: I0219 00:19:06.569491 4889 generic.go:334] "Generic (PLEG): container finished" podID="a5873c75-8d24-42c1-9562-748b778f81a9" containerID="a6f24a5d698aa63eb1f9a3ca29da06e9138112cab58c731bdc976b759fd58bcb" exitCode=0
Feb 19 00:19:06 crc kubenswrapper[4889]: I0219 00:19:06.569574 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a5873c75-8d24-42c1-9562-748b778f81a9","Type":"ContainerDied","Data":"a6f24a5d698aa63eb1f9a3ca29da06e9138112cab58c731bdc976b759fd58bcb"}
Feb 19 00:19:06 crc kubenswrapper[4889]: I0219 00:19:06.631650 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_a5873c75-8d24-42c1-9562-748b778f81a9/manage-dockerfile/0.log"
Feb 19 00:19:07 crc kubenswrapper[4889]: I0219 00:19:07.220764 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ec73ead9-66c6-4de8-8def-ab772839b617" containerName="elasticsearch" probeResult="failure" output=<
Feb 19 00:19:07 crc kubenswrapper[4889]: {"timestamp": "2026-02-19T00:19:07+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 19 00:19:07 crc kubenswrapper[4889]: >
Feb 19 00:19:07 crc kubenswrapper[4889]: I0219 00:19:07.587120 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a5873c75-8d24-42c1-9562-748b778f81a9","Type":"ContainerStarted","Data":"8ad27858d2dbdf3d96a39926ddb136c02e85567e83102d01b76bc6cb56f701e1"}
Feb 19 00:19:07 crc kubenswrapper[4889]: I0219 00:19:07.613331 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-3-build" podStartSLOduration=13.613312461 podStartE2EDuration="13.613312461s" podCreationTimestamp="2026-02-19 00:18:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:19:07.611583095 +0000 UTC m=+773.576248096" watchObservedRunningTime="2026-02-19 00:19:07.613312461 +0000 UTC m=+773.577977452"
Feb 19 00:19:07 crc kubenswrapper[4889]: I0219 00:19:07.781991 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:19:07 crc kubenswrapper[4889]: I0219 00:19:07.782067 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:19:12 crc kubenswrapper[4889]: I0219 00:19:12.237465 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ec73ead9-66c6-4de8-8def-ab772839b617" containerName="elasticsearch" probeResult="failure" output=<
Feb 19 00:19:12 crc kubenswrapper[4889]: {"timestamp": "2026-02-19T00:19:12+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 19 00:19:12 crc kubenswrapper[4889]: >
Feb 19 00:19:17 crc kubenswrapper[4889]: I0219 00:19:17.228877 4889 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ec73ead9-66c6-4de8-8def-ab772839b617" containerName="elasticsearch" probeResult="failure" output=<
Feb 19 00:19:17 crc kubenswrapper[4889]: {"timestamp": "2026-02-19T00:19:17+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 19 00:19:17 crc kubenswrapper[4889]: >
Feb 19 00:19:23 crc kubenswrapper[4889]: I0219 00:19:23.070173 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:19:37 crc kubenswrapper[4889]: I0219 00:19:37.781354 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:19:37 crc kubenswrapper[4889]: I0219 00:19:37.782063 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:20:07 crc kubenswrapper[4889]: I0219 00:20:07.781695 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:20:07 crc kubenswrapper[4889]: I0219 00:20:07.782795 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:20:07 crc kubenswrapper[4889]: I0219 00:20:07.782870 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw"
Feb 19 00:20:07 crc kubenswrapper[4889]: I0219 00:20:07.783850 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"307e06c1533d97412f05ea0d1d6dcd394440c81bd6cadf9dd2c0ed35a7f70904"} pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 00:20:07 crc kubenswrapper[4889]: I0219 00:20:07.783959 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" containerID="cri-o://307e06c1533d97412f05ea0d1d6dcd394440c81bd6cadf9dd2c0ed35a7f70904" gracePeriod=600
Feb 19 00:20:08 crc kubenswrapper[4889]: I0219 00:20:08.688674 4889 generic.go:334] "Generic (PLEG): container finished" podID="900d194e-937f-4a59-abba-21ed9f94f24f" containerID="307e06c1533d97412f05ea0d1d6dcd394440c81bd6cadf9dd2c0ed35a7f70904" exitCode=0
Feb 19 00:20:08 crc kubenswrapper[4889]: I0219 00:20:08.688767 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerDied","Data":"307e06c1533d97412f05ea0d1d6dcd394440c81bd6cadf9dd2c0ed35a7f70904"}
Feb 19 00:20:08 crc kubenswrapper[4889]: I0219 00:20:08.689413 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"420ed3c300f26e608399f169c685b0129d5ac56bdcf0a85ef3838579d162572e"}
Feb 19 00:20:08 crc kubenswrapper[4889]: I0219 00:20:08.689440 4889 scope.go:117] "RemoveContainer" containerID="5344f45dff8eecfe6b1adf4750c6d3e2ab8740bd4015db965625ea2ee833f1a6"
Feb 19 00:20:50 crc kubenswrapper[4889]: I0219 00:20:50.002181 4889 generic.go:334] "Generic (PLEG): container finished" podID="a5873c75-8d24-42c1-9562-748b778f81a9" containerID="8ad27858d2dbdf3d96a39926ddb136c02e85567e83102d01b76bc6cb56f701e1" exitCode=0
Feb 19 00:20:50 crc kubenswrapper[4889]: I0219 00:20:50.002341 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a5873c75-8d24-42c1-9562-748b778f81a9","Type":"ContainerDied","Data":"8ad27858d2dbdf3d96a39926ddb136c02e85567e83102d01b76bc6cb56f701e1"}
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.318965 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build"
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.440413 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-buildcachedir\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.440513 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-build-blob-cache\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.440546 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-root\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.440572 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.440659 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-pull\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.440704 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-system-configs\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.440733 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-push\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.440762 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-proxy-ca-bundles\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.441168 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.440796 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-node-pullsecrets\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.441545 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-run\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.441598 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-buildworkdir\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.441662 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9k5c\" (UniqueName: \"kubernetes.io/projected/a5873c75-8d24-42c1-9562-748b778f81a9-kube-api-access-q9k5c\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.441726 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-ca-bundles\") pod \"a5873c75-8d24-42c1-9562-748b778f81a9\" (UID: \"a5873c75-8d24-42c1-9562-748b778f81a9\") "
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.441878 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.441952 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.442154 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.442188 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.442202 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a5873c75-8d24-42c1-9562-748b778f81a9-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.442235 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.443089 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.443400 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.448085 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5873c75-8d24-42c1-9562-748b778f81a9-kube-api-access-q9k5c" (OuterVolumeSpecName: "kube-api-access-q9k5c") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "kube-api-access-q9k5c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.448713 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.448954 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.476596 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.543430 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a5873c75-8d24-42c1-9562-748b778f81a9-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.543476 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.543492 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a5873c75-8d24-42c1-9562-748b778f81a9-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.543504 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.543513 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.543522 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9k5c\" (UniqueName: \"kubernetes.io/projected/a5873c75-8d24-42c1-9562-748b778f81a9-kube-api-access-q9k5c\") on node \"crc\" DevicePath \"\""
Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.613064 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "build-blob-cache".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:20:51 crc kubenswrapper[4889]: I0219 00:20:51.645118 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:52 crc kubenswrapper[4889]: I0219 00:20:52.021242 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a5873c75-8d24-42c1-9562-748b778f81a9","Type":"ContainerDied","Data":"baf2da18977ddc6d277c607247d61f653ae76e180875cfad345c85d2646ec96d"} Feb 19 00:20:52 crc kubenswrapper[4889]: I0219 00:20:52.021332 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baf2da18977ddc6d277c607247d61f653ae76e180875cfad345c85d2646ec96d" Feb 19 00:20:52 crc kubenswrapper[4889]: I0219 00:20:52.021329 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Feb 19 00:20:53 crc kubenswrapper[4889]: I0219 00:20:53.278679 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a5873c75-8d24-42c1-9562-748b778f81a9" (UID: "a5873c75-8d24-42c1-9562-748b778f81a9"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:20:53 crc kubenswrapper[4889]: I0219 00:20:53.373156 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a5873c75-8d24-42c1-9562-748b778f81a9-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.460820 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:20:55 crc kubenswrapper[4889]: E0219 00:20:55.461209 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5873c75-8d24-42c1-9562-748b778f81a9" containerName="manage-dockerfile" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.461245 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5873c75-8d24-42c1-9562-748b778f81a9" containerName="manage-dockerfile" Feb 19 00:20:55 crc kubenswrapper[4889]: E0219 00:20:55.461270 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5873c75-8d24-42c1-9562-748b778f81a9" containerName="docker-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.461277 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5873c75-8d24-42c1-9562-748b778f81a9" containerName="docker-build" Feb 19 00:20:55 crc kubenswrapper[4889]: E0219 00:20:55.461287 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5873c75-8d24-42c1-9562-748b778f81a9" containerName="git-clone" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.461344 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5873c75-8d24-42c1-9562-748b778f81a9" containerName="git-clone" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.461510 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5873c75-8d24-42c1-9562-748b778f81a9" containerName="docker-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.463111 4889 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.466451 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-9cxxm" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.469414 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.469734 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.475486 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.478992 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503328 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503367 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503397 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503417 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503434 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503450 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503542 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: 
\"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503575 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503632 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503661 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503687 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.503706 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lrw7\" 
(UniqueName: \"kubernetes.io/projected/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-kube-api-access-4lrw7\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.604792 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.604853 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.604899 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.604933 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.604942 4889 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.604964 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605230 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lrw7\" (UniqueName: \"kubernetes.io/projected/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-kube-api-access-4lrw7\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605304 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605062 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605357 4889 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605417 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605418 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605462 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605498 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc 
kubenswrapper[4889]: I0219 00:20:55.605533 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605681 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.605995 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.606304 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.606356 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.606462 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.607079 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.610971 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-push\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.610987 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.624136 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lrw7\" (UniqueName: \"kubernetes.io/projected/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-kube-api-access-4lrw7\") pod \"smart-gateway-operator-1-build\" 
(UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:55 crc kubenswrapper[4889]: I0219 00:20:55.842718 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:20:56 crc kubenswrapper[4889]: I0219 00:20:56.074135 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:20:57 crc kubenswrapper[4889]: I0219 00:20:57.061665 4889 generic.go:334] "Generic (PLEG): container finished" podID="9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" containerID="6724612c5c22a8cd3005189e59a8968738f4cde4b58d2591a3bee4e02019cc26" exitCode=0 Feb 19 00:20:57 crc kubenswrapper[4889]: I0219 00:20:57.061729 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b","Type":"ContainerDied","Data":"6724612c5c22a8cd3005189e59a8968738f4cde4b58d2591a3bee4e02019cc26"} Feb 19 00:20:57 crc kubenswrapper[4889]: I0219 00:20:57.062107 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b","Type":"ContainerStarted","Data":"57599dcb4bbfc48b4b82396d52316b05ab6d9d43e407553cf81fbee02ce849c4"} Feb 19 00:20:58 crc kubenswrapper[4889]: I0219 00:20:58.082159 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b","Type":"ContainerStarted","Data":"58dc98d6558b3075205056780072043aece7ed5e6442d4af5456f77cbf81880c"} Feb 19 00:20:58 crc kubenswrapper[4889]: I0219 00:20:58.111053 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.11103297 podStartE2EDuration="3.11103297s" podCreationTimestamp="2026-02-19 00:20:55 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:20:58.109144584 +0000 UTC m=+884.073809575" watchObservedRunningTime="2026-02-19 00:20:58.11103297 +0000 UTC m=+884.075697971" Feb 19 00:21:05 crc kubenswrapper[4889]: I0219 00:21:05.985185 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:21:05 crc kubenswrapper[4889]: I0219 00:21:05.986412 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" containerName="docker-build" containerID="cri-o://58dc98d6558b3075205056780072043aece7ed5e6442d4af5456f77cbf81880c" gracePeriod=30 Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.611893 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.623455 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.630565 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.632047 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.632127 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.648276 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749097 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749260 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749334 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: 
\"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749427 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749463 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpkb\" (UniqueName: \"kubernetes.io/projected/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-kube-api-access-4gpkb\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749489 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749532 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749571 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749590 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749627 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749650 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.749680 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" 
Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.851903 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.851970 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852004 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852042 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852090 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: 
\"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852123 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852159 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852181 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gpkb\" (UniqueName: \"kubernetes.io/projected/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-kube-api-access-4gpkb\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852227 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852255 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852282 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852275 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.852298 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.853009 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.853156 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.853278 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.853648 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.853704 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.853860 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.854098 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.854351 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.862765 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.863585 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-push\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.968980 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gpkb\" (UniqueName: \"kubernetes.io/projected/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-kube-api-access-4gpkb\") pod \"smart-gateway-operator-2-build\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:07 crc kubenswrapper[4889]: I0219 00:21:07.969367 4889 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.160907 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b/docker-build/0.log" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.174970 4889 generic.go:334] "Generic (PLEG): container finished" podID="9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" containerID="58dc98d6558b3075205056780072043aece7ed5e6442d4af5456f77cbf81880c" exitCode=1 Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.175018 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b","Type":"ContainerDied","Data":"58dc98d6558b3075205056780072043aece7ed5e6442d4af5456f77cbf81880c"} Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.239900 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b/docker-build/0.log" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.240602 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376026 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-blob-cache\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376107 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lrw7\" (UniqueName: \"kubernetes.io/projected/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-kube-api-access-4lrw7\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376167 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-node-pullsecrets\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376276 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-root\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376330 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-push\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376395 4889 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-run\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376491 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-pull\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376537 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildcachedir\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376589 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-proxy-ca-bundles\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376630 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildworkdir\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376666 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-system-configs\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.376706 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-ca-bundles\") pod \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\" (UID: \"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b\") " Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.378032 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.379396 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.379419 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.380020 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.380281 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.380462 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.380899 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.382721 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-kube-api-access-4lrw7" (OuterVolumeSpecName: "kube-api-access-4lrw7") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "kube-api-access-4lrw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.383182 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.384388 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.420164 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478789 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478849 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478869 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478887 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478904 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lrw7\" (UniqueName: \"kubernetes.io/projected/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-kube-api-access-4lrw7\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478922 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478938 4889 reconciler_common.go:293] "Volume detached for 
volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478956 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478973 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.478990 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.807842 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:21:08 crc kubenswrapper[4889]: I0219 00:21:08.883195 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:09 crc kubenswrapper[4889]: I0219 00:21:09.183685 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f15e83b1-a9b2-472f-b596-dc3dc3c2257f","Type":"ContainerStarted","Data":"c606ca59a3f6b496401830472a97cf295378237e3962b89699695c0e7a037765"} Feb 19 00:21:09 crc kubenswrapper[4889]: I0219 00:21:09.183741 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f15e83b1-a9b2-472f-b596-dc3dc3c2257f","Type":"ContainerStarted","Data":"82a75c72f0c81669f11b43428f82d0cb9d21b2b0fecf83f5038947d6b2cf8630"} Feb 19 00:21:09 crc kubenswrapper[4889]: I0219 00:21:09.186147 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b/docker-build/0.log" Feb 19 00:21:09 crc kubenswrapper[4889]: I0219 00:21:09.186509 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b","Type":"ContainerDied","Data":"57599dcb4bbfc48b4b82396d52316b05ab6d9d43e407553cf81fbee02ce849c4"} Feb 19 00:21:09 crc kubenswrapper[4889]: I0219 00:21:09.186548 4889 scope.go:117] "RemoveContainer" containerID="58dc98d6558b3075205056780072043aece7ed5e6442d4af5456f77cbf81880c" Feb 19 00:21:09 crc kubenswrapper[4889]: I0219 00:21:09.186563 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Feb 19 00:21:09 crc kubenswrapper[4889]: I0219 00:21:09.261706 4889 scope.go:117] "RemoveContainer" containerID="6724612c5c22a8cd3005189e59a8968738f4cde4b58d2591a3bee4e02019cc26" Feb 19 00:21:10 crc kubenswrapper[4889]: I0219 00:21:10.291637 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" (UID: "9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:21:10 crc kubenswrapper[4889]: I0219 00:21:10.303211 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:10 crc kubenswrapper[4889]: I0219 00:21:10.417782 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:21:10 crc kubenswrapper[4889]: I0219 00:21:10.421700 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Feb 19 00:21:10 crc kubenswrapper[4889]: I0219 00:21:10.732144 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" path="/var/lib/kubelet/pods/9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b/volumes" Feb 19 00:21:11 crc kubenswrapper[4889]: I0219 00:21:11.205516 4889 generic.go:334] "Generic (PLEG): container finished" podID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerID="c606ca59a3f6b496401830472a97cf295378237e3962b89699695c0e7a037765" exitCode=0 Feb 19 00:21:11 crc kubenswrapper[4889]: I0219 00:21:11.205848 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f15e83b1-a9b2-472f-b596-dc3dc3c2257f","Type":"ContainerDied","Data":"c606ca59a3f6b496401830472a97cf295378237e3962b89699695c0e7a037765"} Feb 19 00:21:12 crc kubenswrapper[4889]: I0219 00:21:12.215585 4889 generic.go:334] "Generic (PLEG): container finished" podID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerID="896f26d452efe52b6637affff66975a657345f4c79f950cedf224c182060c683" exitCode=0 Feb 19 00:21:12 crc kubenswrapper[4889]: I0219 00:21:12.215700 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f15e83b1-a9b2-472f-b596-dc3dc3c2257f","Type":"ContainerDied","Data":"896f26d452efe52b6637affff66975a657345f4c79f950cedf224c182060c683"} Feb 19 00:21:12 crc kubenswrapper[4889]: I0219 00:21:12.268258 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_f15e83b1-a9b2-472f-b596-dc3dc3c2257f/manage-dockerfile/0.log" Feb 19 00:21:13 crc kubenswrapper[4889]: I0219 00:21:13.225260 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f15e83b1-a9b2-472f-b596-dc3dc3c2257f","Type":"ContainerStarted","Data":"d66d2085cdf4ae907e5c6a18449ae98e3dc1c1fc288d16a861845bb78cb95fb1"} Feb 19 00:21:13 crc kubenswrapper[4889]: I0219 00:21:13.256702 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=6.256680599 podStartE2EDuration="6.256680599s" podCreationTimestamp="2026-02-19 00:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:21:13.253865354 +0000 UTC m=+899.218530345" watchObservedRunningTime="2026-02-19 00:21:13.256680599 +0000 UTC m=+899.221345590" Feb 19 00:22:22 crc kubenswrapper[4889]: I0219 00:22:22.745308 4889 
generic.go:334] "Generic (PLEG): container finished" podID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerID="d66d2085cdf4ae907e5c6a18449ae98e3dc1c1fc288d16a861845bb78cb95fb1" exitCode=0 Feb 19 00:22:22 crc kubenswrapper[4889]: I0219 00:22:22.745415 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f15e83b1-a9b2-472f-b596-dc3dc3c2257f","Type":"ContainerDied","Data":"d66d2085cdf4ae907e5c6a18449ae98e3dc1c1fc288d16a861845bb78cb95fb1"} Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.027661 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.169697 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-root\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.169743 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-pull\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.169777 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-proxy-ca-bundles\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") " Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.169808 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-blob-cache\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") "
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.169905 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-system-configs\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") "
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.169934 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-push\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") "
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.170001 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gpkb\" (UniqueName: \"kubernetes.io/projected/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-kube-api-access-4gpkb\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") "
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.170099 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-ca-bundles\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") "
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.170142 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildcachedir\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") "
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.170195 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildworkdir\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") "
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.170262 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-node-pullsecrets\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") "
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.170297 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-run\") pod \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\" (UID: \"f15e83b1-a9b2-472f-b596-dc3dc3c2257f\") "
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.170472 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.170737 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.171097 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildcachedir\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.171119 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.171289 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.171303 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.171933 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.173202 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.176140 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.177237 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.177718 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-kube-api-access-4gpkb" (OuterVolumeSpecName: "kube-api-access-4gpkb") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "kube-api-access-4gpkb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.177760 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.272991 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-run\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.273339 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.273482 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.273545 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-system-configs\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.273644 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.273700 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gpkb\" (UniqueName: \"kubernetes.io/projected/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-kube-api-access-4gpkb\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.273760 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.273816 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-buildworkdir\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.355941 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.375431 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-build-blob-cache\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.763065 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"f15e83b1-a9b2-472f-b596-dc3dc3c2257f","Type":"ContainerDied","Data":"82a75c72f0c81669f11b43428f82d0cb9d21b2b0fecf83f5038947d6b2cf8630"}
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.763126 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a75c72f0c81669f11b43428f82d0cb9d21b2b0fecf83f5038947d6b2cf8630"
Feb 19 00:22:24 crc kubenswrapper[4889]: I0219 00:22:24.763238 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Feb 19 00:22:26 crc kubenswrapper[4889]: I0219 00:22:26.115954 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f15e83b1-a9b2-472f-b596-dc3dc3c2257f" (UID: "f15e83b1-a9b2-472f-b596-dc3dc3c2257f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:22:26 crc kubenswrapper[4889]: I0219 00:22:26.120534 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f15e83b1-a9b2-472f-b596-dc3dc3c2257f-container-storage-root\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.517724 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hzthp"]
Feb 19 00:22:27 crc kubenswrapper[4889]: E0219 00:22:27.518521 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerName="docker-build"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.518539 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerName="docker-build"
Feb 19 00:22:27 crc kubenswrapper[4889]: E0219 00:22:27.518549 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerName="git-clone"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.518558 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerName="git-clone"
Feb 19 00:22:27 crc kubenswrapper[4889]: E0219 00:22:27.518565 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" containerName="manage-dockerfile"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.518571 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" containerName="manage-dockerfile"
Feb 19 00:22:27 crc kubenswrapper[4889]: E0219 00:22:27.518582 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" containerName="docker-build"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.518588 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" containerName="docker-build"
Feb 19 00:22:27 crc kubenswrapper[4889]: E0219 00:22:27.518603 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerName="manage-dockerfile"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.518609 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerName="manage-dockerfile"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.518742 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cc1e6e8-fdf3-4dc0-92e3-62f359fd1a0b" containerName="docker-build"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.518756 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="f15e83b1-a9b2-472f-b596-dc3dc3c2257f" containerName="docker-build"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.519805 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.535572 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hzthp"]
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.644865 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4cxx\" (UniqueName: \"kubernetes.io/projected/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-kube-api-access-t4cxx\") pod \"community-operators-hzthp\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.644921 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-utilities\") pod \"community-operators-hzthp\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.644955 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-catalog-content\") pod \"community-operators-hzthp\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.746771 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4cxx\" (UniqueName: \"kubernetes.io/projected/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-kube-api-access-t4cxx\") pod \"community-operators-hzthp\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.746848 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-utilities\") pod \"community-operators-hzthp\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.746883 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-catalog-content\") pod \"community-operators-hzthp\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.748275 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-catalog-content\") pod \"community-operators-hzthp\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.748266 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-utilities\") pod \"community-operators-hzthp\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.782389 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4cxx\" (UniqueName: \"kubernetes.io/projected/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-kube-api-access-t4cxx\") pod \"community-operators-hzthp\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:27 crc kubenswrapper[4889]: I0219 00:22:27.836270 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzthp"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.392037 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hzthp"]
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.779904 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.781380 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.785812 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.786076 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-9cxxm"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.788929 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.789103 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.808682 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.808800 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzthp" event={"ID":"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe","Type":"ContainerStarted","Data":"93dd21705a4ee6a95ae1607b6650da95d41536536b113290df87a4ff718ffaa0"}
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.862713 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildworkdir\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.862784 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-run\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.862826 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-root\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.862851 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.862870 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-system-configs\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.862891 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.863146 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.863465 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5gq\" (UniqueName: \"kubernetes.io/projected/7f6b43ec-472b-45de-9f5a-3e46d6a95910-kube-api-access-ft5gq\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.863520 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.863656 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-push\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.863789 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-pull\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.863977 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildcachedir\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966462 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-pull\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966581 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildcachedir\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966613 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildworkdir\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966640 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-run\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966665 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-root\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966682 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966698 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-system-configs\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966720 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966741 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966790 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5gq\" (UniqueName: \"kubernetes.io/projected/7f6b43ec-472b-45de-9f5a-3e46d6a95910-kube-api-access-ft5gq\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966809 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.966837 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-push\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.967318 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildcachedir\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.967938 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildworkdir\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.968269 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-run\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.968329 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-root\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.968496 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.968698 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-system-configs\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.968786 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.968913 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.969465 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.973696 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-pull\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.973820 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-push\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:28 crc kubenswrapper[4889]: I0219 00:22:28.991240 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5gq\" (UniqueName: \"kubernetes.io/projected/7f6b43ec-472b-45de-9f5a-3e46d6a95910-kube-api-access-ft5gq\") pod \"sg-core-1-build\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:29 crc kubenswrapper[4889]: I0219 00:22:29.105427 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Feb 19 00:22:29 crc kubenswrapper[4889]: I0219 00:22:29.641691 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Feb 19 00:22:29 crc kubenswrapper[4889]: I0219 00:22:29.828206 4889 generic.go:334] "Generic (PLEG): container finished" podID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerID="cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0" exitCode=0
Feb 19 00:22:29 crc kubenswrapper[4889]: I0219 00:22:29.828373 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzthp" event={"ID":"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe","Type":"ContainerDied","Data":"cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0"}
Feb 19 00:22:29 crc kubenswrapper[4889]: I0219 00:22:29.830068 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"7f6b43ec-472b-45de-9f5a-3e46d6a95910","Type":"ContainerStarted","Data":"27a2c9852e26a834f6f2f8afe51eb6ebbb1e4e9cdd725e9bac0cda45887ef855"}
Feb 19 00:22:29 crc kubenswrapper[4889]: I0219 00:22:29.833957 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 00:22:30 crc kubenswrapper[4889]: I0219 00:22:30.840108 4889 generic.go:334] "Generic (PLEG): container finished" podID="7f6b43ec-472b-45de-9f5a-3e46d6a95910" containerID="d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e" exitCode=0
Feb 19 00:22:30 crc kubenswrapper[4889]: I0219 00:22:30.840280 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"7f6b43ec-472b-45de-9f5a-3e46d6a95910","Type":"ContainerDied","Data":"d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e"}
Feb 19 00:22:30 crc kubenswrapper[4889]: I0219 00:22:30.845630 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzthp" event={"ID":"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe","Type":"ContainerStarted","Data":"b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974"}
Feb 19 00:22:31 crc kubenswrapper[4889]: I0219 00:22:31.856322 4889 generic.go:334] "Generic (PLEG): container finished" podID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerID="b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974" exitCode=0
Feb 19 00:22:31 crc kubenswrapper[4889]: I0219 00:22:31.856379 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzthp" event={"ID":"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe","Type":"ContainerDied","Data":"b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974"}
Feb 19 00:22:31 crc kubenswrapper[4889]: I0219 00:22:31.860273 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"7f6b43ec-472b-45de-9f5a-3e46d6a95910","Type":"ContainerStarted","Data":"ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636"}
Feb 19 00:22:31 crc kubenswrapper[4889]: I0219 00:22:31.913132 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.913107668 podStartE2EDuration="3.913107668s" podCreationTimestamp="2026-02-19 00:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:22:31.90440222 +0000 UTC m=+977.869067211" watchObservedRunningTime="2026-02-19 00:22:31.913107668 +0000 UTC m=+977.877772659"
Feb 19 00:22:32 crc kubenswrapper[4889]: I0219 00:22:32.871556 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzthp" event={"ID":"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe","Type":"ContainerStarted","Data":"da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df"}
Feb 19 00:22:36 crc
kubenswrapper[4889]: I0219 00:22:36.556358 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hzthp" podStartSLOduration=7.005816371 podStartE2EDuration="9.556335098s" podCreationTimestamp="2026-02-19 00:22:27 +0000 UTC" firstStartedPulling="2026-02-19 00:22:29.83369316 +0000 UTC m=+975.798358151" lastFinishedPulling="2026-02-19 00:22:32.384211887 +0000 UTC m=+978.348876878" observedRunningTime="2026-02-19 00:22:32.900376489 +0000 UTC m=+978.865041490" watchObservedRunningTime="2026-02-19 00:22:36.556335098 +0000 UTC m=+982.521000079" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.556939 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xxct2"] Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.558079 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.584384 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxct2"] Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.694070 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cs2p\" (UniqueName: \"kubernetes.io/projected/57176bf3-d497-4a8d-99fa-df7873c7e446-kube-api-access-7cs2p\") pod \"redhat-operators-xxct2\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.694177 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-utilities\") pod \"redhat-operators-xxct2\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: 
I0219 00:22:36.694255 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-catalog-content\") pod \"redhat-operators-xxct2\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.796001 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-catalog-content\") pod \"redhat-operators-xxct2\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.796656 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cs2p\" (UniqueName: \"kubernetes.io/projected/57176bf3-d497-4a8d-99fa-df7873c7e446-kube-api-access-7cs2p\") pod \"redhat-operators-xxct2\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.796718 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-utilities\") pod \"redhat-operators-xxct2\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.797331 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-utilities\") pod \"redhat-operators-xxct2\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.797579 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-catalog-content\") pod \"redhat-operators-xxct2\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.821725 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cs2p\" (UniqueName: \"kubernetes.io/projected/57176bf3-d497-4a8d-99fa-df7873c7e446-kube-api-access-7cs2p\") pod \"redhat-operators-xxct2\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:36 crc kubenswrapper[4889]: I0219 00:22:36.878289 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:37 crc kubenswrapper[4889]: I0219 00:22:37.150322 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xxct2"] Feb 19 00:22:37 crc kubenswrapper[4889]: W0219 00:22:37.170627 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57176bf3_d497_4a8d_99fa_df7873c7e446.slice/crio-b526a0e88e673a745016ca18b9f79aee13a19e1401344ecc8c57c3bf5fd0175a WatchSource:0}: Error finding container b526a0e88e673a745016ca18b9f79aee13a19e1401344ecc8c57c3bf5fd0175a: Status 404 returned error can't find the container with id b526a0e88e673a745016ca18b9f79aee13a19e1401344ecc8c57c3bf5fd0175a Feb 19 00:22:37 crc kubenswrapper[4889]: I0219 00:22:37.782437 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:22:37 crc 
kubenswrapper[4889]: I0219 00:22:37.782517 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:22:37 crc kubenswrapper[4889]: I0219 00:22:37.836425 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hzthp" Feb 19 00:22:37 crc kubenswrapper[4889]: I0219 00:22:37.836830 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hzthp" Feb 19 00:22:37 crc kubenswrapper[4889]: I0219 00:22:37.893368 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hzthp" Feb 19 00:22:37 crc kubenswrapper[4889]: I0219 00:22:37.908971 4889 generic.go:334] "Generic (PLEG): container finished" podID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerID="36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773" exitCode=0 Feb 19 00:22:37 crc kubenswrapper[4889]: I0219 00:22:37.910477 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxct2" event={"ID":"57176bf3-d497-4a8d-99fa-df7873c7e446","Type":"ContainerDied","Data":"36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773"} Feb 19 00:22:37 crc kubenswrapper[4889]: I0219 00:22:37.910508 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxct2" event={"ID":"57176bf3-d497-4a8d-99fa-df7873c7e446","Type":"ContainerStarted","Data":"b526a0e88e673a745016ca18b9f79aee13a19e1401344ecc8c57c3bf5fd0175a"} Feb 19 00:22:37 crc kubenswrapper[4889]: I0219 00:22:37.965944 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-hzthp" Feb 19 00:22:38 crc kubenswrapper[4889]: I0219 00:22:38.922380 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxct2" event={"ID":"57176bf3-d497-4a8d-99fa-df7873c7e446","Type":"ContainerStarted","Data":"f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8"} Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.161964 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.162285 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="7f6b43ec-472b-45de-9f5a-3e46d6a95910" containerName="docker-build" containerID="cri-o://ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636" gracePeriod=30 Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.557107 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_7f6b43ec-472b-45de-9f5a-3e46d6a95910/docker-build/0.log" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.557740 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.646355 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-run\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.646714 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-proxy-ca-bundles\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.646816 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft5gq\" (UniqueName: \"kubernetes.io/projected/7f6b43ec-472b-45de-9f5a-3e46d6a95910-kube-api-access-ft5gq\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.646936 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-node-pullsecrets\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647040 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-root\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647113 4889 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-system-configs\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647238 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-pull\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647346 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-blob-cache\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647450 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildcachedir\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647955 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-push\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.648198 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildworkdir\") pod 
\"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.648351 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-ca-bundles\") pod \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\" (UID: \"7f6b43ec-472b-45de-9f5a-3e46d6a95910\") " Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647057 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647633 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647656 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647801 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.647860 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.649034 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.649438 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.649521 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.649594 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.649657 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.649714 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.649770 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.649745 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod 
"7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.654402 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.654605 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6b43ec-472b-45de-9f5a-3e46d6a95910-kube-api-access-ft5gq" (OuterVolumeSpecName: "kube-api-access-ft5gq") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "kube-api-access-ft5gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.669720 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.744009 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.752038 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft5gq\" (UniqueName: \"kubernetes.io/projected/7f6b43ec-472b-45de-9f5a-3e46d6a95910-kube-api-access-ft5gq\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.752125 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.752161 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.752170 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/7f6b43ec-472b-45de-9f5a-3e46d6a95910-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.752180 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7f6b43ec-472b-45de-9f5a-3e46d6a95910-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.785501 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7f6b43ec-472b-45de-9f5a-3e46d6a95910" (UID: "7f6b43ec-472b-45de-9f5a-3e46d6a95910"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.853193 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7f6b43ec-472b-45de-9f5a-3e46d6a95910-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.928915 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_7f6b43ec-472b-45de-9f5a-3e46d6a95910/docker-build/0.log" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.929613 4889 generic.go:334] "Generic (PLEG): container finished" podID="7f6b43ec-472b-45de-9f5a-3e46d6a95910" containerID="ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636" exitCode=1 Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.929714 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"7f6b43ec-472b-45de-9f5a-3e46d6a95910","Type":"ContainerDied","Data":"ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636"} Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.929729 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.929760 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"7f6b43ec-472b-45de-9f5a-3e46d6a95910","Type":"ContainerDied","Data":"27a2c9852e26a834f6f2f8afe51eb6ebbb1e4e9cdd725e9bac0cda45887ef855"} Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.929781 4889 scope.go:117] "RemoveContainer" containerID="ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.932721 4889 generic.go:334] "Generic (PLEG): container finished" podID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerID="f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8" exitCode=0 Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.932744 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxct2" event={"ID":"57176bf3-d497-4a8d-99fa-df7873c7e446","Type":"ContainerDied","Data":"f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8"} Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.976384 4889 scope.go:117] "RemoveContainer" containerID="d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e" Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.983771 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 19 00:22:39 crc kubenswrapper[4889]: I0219 00:22:39.990482 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.008056 4889 scope.go:117] "RemoveContainer" containerID="ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636" Feb 19 00:22:40 crc kubenswrapper[4889]: E0219 00:22:40.008738 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636\": container with ID starting with ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636 not found: ID does not exist" containerID="ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.008802 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636"} err="failed to get container status \"ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636\": rpc error: code = NotFound desc = could not find container \"ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636\": container with ID starting with ab6a3d7d3183579d25dc91c24c04b30595e7181a0b2fd978fc6d095e998d1636 not found: ID does not exist" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.009004 4889 scope.go:117] "RemoveContainer" containerID="d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e" Feb 19 00:22:40 crc kubenswrapper[4889]: E0219 00:22:40.009843 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e\": container with ID starting with d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e not found: ID does not exist" containerID="d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.009883 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e"} err="failed to get container status \"d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e\": rpc error: code = NotFound desc = could not find container \"d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e\": container with ID 
starting with d0484c01c8255867a249813ecd3d5f510df4dc629f5cf6d564ab215ffd6fe73e not found: ID does not exist" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.134408 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hzthp"] Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.738655 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f6b43ec-472b-45de-9f5a-3e46d6a95910" path="/var/lib/kubelet/pods/7f6b43ec-472b-45de-9f5a-3e46d6a95910/volumes" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.776742 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 19 00:22:40 crc kubenswrapper[4889]: E0219 00:22:40.779026 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6b43ec-472b-45de-9f5a-3e46d6a95910" containerName="manage-dockerfile" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.779153 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6b43ec-472b-45de-9f5a-3e46d6a95910" containerName="manage-dockerfile" Feb 19 00:22:40 crc kubenswrapper[4889]: E0219 00:22:40.779287 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f6b43ec-472b-45de-9f5a-3e46d6a95910" containerName="docker-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.779370 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f6b43ec-472b-45de-9f5a-3e46d6a95910" containerName="docker-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.779630 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f6b43ec-472b-45de-9f5a-3e46d6a95910" containerName="docker-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.780736 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.783548 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-9cxxm" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.783938 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.784195 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.784471 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.814605 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.869140 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.869510 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.869629 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-proxy-ca-bundles\") pod 
\"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.869849 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l56zp\" (UniqueName: \"kubernetes.io/projected/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-kube-api-access-l56zp\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.870014 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.870093 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.870204 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-push\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.870315 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-run\") 
pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.870410 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.870500 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-pull\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.870577 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.870659 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.945135 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxct2" 
event={"ID":"57176bf3-d497-4a8d-99fa-df7873c7e446","Type":"ContainerStarted","Data":"d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a"} Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.945727 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hzthp" podUID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerName="registry-server" containerID="cri-o://da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df" gracePeriod=2 Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972281 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972343 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-push\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972371 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972404 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-system-configs\") pod \"sg-core-2-build\" (UID: 
\"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972429 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-pull\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972459 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972486 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972527 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972568 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 
00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972612 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972645 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l56zp\" (UniqueName: \"kubernetes.io/projected/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-kube-api-access-l56zp\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972696 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972735 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildcachedir\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.972813 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.973282 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.973527 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-system-configs\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.973798 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-run\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.973829 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.973876 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.974091 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-root\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.974447 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildworkdir\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.975590 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xxct2" podStartSLOduration=2.487427791 podStartE2EDuration="4.97550599s" podCreationTimestamp="2026-02-19 00:22:36 +0000 UTC" firstStartedPulling="2026-02-19 00:22:37.911838799 +0000 UTC m=+983.876503790" lastFinishedPulling="2026-02-19 00:22:40.399916998 +0000 UTC m=+986.364581989" observedRunningTime="2026-02-19 00:22:40.968563489 +0000 UTC m=+986.933228490" watchObservedRunningTime="2026-02-19 00:22:40.97550599 +0000 UTC m=+986.940170991" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.983997 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-push\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc kubenswrapper[4889]: I0219 00:22:40.985115 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-pull\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:40 crc 
kubenswrapper[4889]: I0219 00:22:40.993514 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l56zp\" (UniqueName: \"kubernetes.io/projected/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-kube-api-access-l56zp\") pod \"sg-core-2-build\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " pod="service-telemetry/sg-core-2-build" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.108467 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.335309 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzthp" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.480905 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-utilities\") pod \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.481183 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-catalog-content\") pod \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.481280 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4cxx\" (UniqueName: \"kubernetes.io/projected/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-kube-api-access-t4cxx\") pod \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\" (UID: \"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe\") " Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.482621 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-utilities" (OuterVolumeSpecName: "utilities") pod "a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" (UID: "a6c576cf-f3e6-4a28-9686-6bd4fa028dbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.485728 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-kube-api-access-t4cxx" (OuterVolumeSpecName: "kube-api-access-t4cxx") pod "a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" (UID: "a6c576cf-f3e6-4a28-9686-6bd4fa028dbe"). InnerVolumeSpecName "kube-api-access-t4cxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.545933 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" (UID: "a6c576cf-f3e6-4a28-9686-6bd4fa028dbe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.583155 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.583204 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4cxx\" (UniqueName: \"kubernetes.io/projected/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-kube-api-access-t4cxx\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.583239 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.603276 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.955246 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09","Type":"ContainerStarted","Data":"82f10369bf36e9d38aa6ab960a6eb06fcbba94a5fa338083cacacb50933cb86d"} Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.957097 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09","Type":"ContainerStarted","Data":"d9a0def49a9d22d7deb3b7f07ade25b9f774eebfee0c1de9735a9d62244c0374"} Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.965593 4889 generic.go:334] "Generic (PLEG): container finished" podID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerID="da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df" exitCode=0 Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.965734 4889 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzthp" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.965737 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzthp" event={"ID":"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe","Type":"ContainerDied","Data":"da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df"} Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.966121 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzthp" event={"ID":"a6c576cf-f3e6-4a28-9686-6bd4fa028dbe","Type":"ContainerDied","Data":"93dd21705a4ee6a95ae1607b6650da95d41536536b113290df87a4ff718ffaa0"} Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.966167 4889 scope.go:117] "RemoveContainer" containerID="da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df" Feb 19 00:22:41 crc kubenswrapper[4889]: I0219 00:22:41.985935 4889 scope.go:117] "RemoveContainer" containerID="b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974" Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.032853 4889 scope.go:117] "RemoveContainer" containerID="cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0" Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.042535 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hzthp"] Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.048076 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hzthp"] Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.071059 4889 scope.go:117] "RemoveContainer" containerID="da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df" Feb 19 00:22:42 crc kubenswrapper[4889]: E0219 00:22:42.072308 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df\": container with ID starting with da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df not found: ID does not exist" containerID="da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df" Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.072363 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df"} err="failed to get container status \"da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df\": rpc error: code = NotFound desc = could not find container \"da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df\": container with ID starting with da4fa9dfc718c922fa095c1cd1f517bd8861c06c31ebf86a8478390a480f88df not found: ID does not exist" Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.072398 4889 scope.go:117] "RemoveContainer" containerID="b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974" Feb 19 00:22:42 crc kubenswrapper[4889]: E0219 00:22:42.072708 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974\": container with ID starting with b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974 not found: ID does not exist" containerID="b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974" Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.072739 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974"} err="failed to get container status \"b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974\": rpc error: code = NotFound desc = could not find container \"b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974\": container with ID 
starting with b3969616137283f2109fadf6a10d3190a1cede27e537eb75c82dd90002784974 not found: ID does not exist" Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.072756 4889 scope.go:117] "RemoveContainer" containerID="cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0" Feb 19 00:22:42 crc kubenswrapper[4889]: E0219 00:22:42.073012 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0\": container with ID starting with cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0 not found: ID does not exist" containerID="cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0" Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.073046 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0"} err="failed to get container status \"cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0\": rpc error: code = NotFound desc = could not find container \"cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0\": container with ID starting with cef233b295fda0f7692dcd2241177b6848c428dce32f962fd559aa5f47e731d0 not found: ID does not exist" Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.735382 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" path="/var/lib/kubelet/pods/a6c576cf-f3e6-4a28-9686-6bd4fa028dbe/volumes" Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.976744 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerID="82f10369bf36e9d38aa6ab960a6eb06fcbba94a5fa338083cacacb50933cb86d" exitCode=0 Feb 19 00:22:42 crc kubenswrapper[4889]: I0219 00:22:42.976814 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/sg-core-2-build" event={"ID":"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09","Type":"ContainerDied","Data":"82f10369bf36e9d38aa6ab960a6eb06fcbba94a5fa338083cacacb50933cb86d"} Feb 19 00:22:43 crc kubenswrapper[4889]: I0219 00:22:43.987196 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerID="591a61ad01eac6db59311422132fab68ac0d02bdbad0327ee9162bac18f7c613" exitCode=0 Feb 19 00:22:43 crc kubenswrapper[4889]: I0219 00:22:43.987284 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09","Type":"ContainerDied","Data":"591a61ad01eac6db59311422132fab68ac0d02bdbad0327ee9162bac18f7c613"} Feb 19 00:22:44 crc kubenswrapper[4889]: I0219 00:22:44.050867 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09/manage-dockerfile/0.log" Feb 19 00:22:44 crc kubenswrapper[4889]: I0219 00:22:44.997271 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09","Type":"ContainerStarted","Data":"9825a2d8e9615e4ede620e9ffa7e6b4d7b2240ad09cfda8a4b1a77af630b6a83"} Feb 19 00:22:45 crc kubenswrapper[4889]: I0219 00:22:45.030581 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.030549205 podStartE2EDuration="5.030549205s" podCreationTimestamp="2026-02-19 00:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:22:45.027683443 +0000 UTC m=+990.992348444" watchObservedRunningTime="2026-02-19 00:22:45.030549205 +0000 UTC m=+990.995214206" Feb 19 00:22:46 crc kubenswrapper[4889]: I0219 00:22:46.879111 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:46 crc kubenswrapper[4889]: I0219 00:22:46.879179 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:46 crc kubenswrapper[4889]: I0219 00:22:46.931534 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:47 crc kubenswrapper[4889]: I0219 00:22:47.058378 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:47 crc kubenswrapper[4889]: I0219 00:22:47.169357 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxct2"] Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.046301 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xxct2" podUID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerName="registry-server" containerID="cri-o://d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a" gracePeriod=2 Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.421638 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.514475 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-catalog-content\") pod \"57176bf3-d497-4a8d-99fa-df7873c7e446\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.514554 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-utilities\") pod \"57176bf3-d497-4a8d-99fa-df7873c7e446\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.514612 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cs2p\" (UniqueName: \"kubernetes.io/projected/57176bf3-d497-4a8d-99fa-df7873c7e446-kube-api-access-7cs2p\") pod \"57176bf3-d497-4a8d-99fa-df7873c7e446\" (UID: \"57176bf3-d497-4a8d-99fa-df7873c7e446\") " Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.515491 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-utilities" (OuterVolumeSpecName: "utilities") pod "57176bf3-d497-4a8d-99fa-df7873c7e446" (UID: "57176bf3-d497-4a8d-99fa-df7873c7e446"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.522354 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57176bf3-d497-4a8d-99fa-df7873c7e446-kube-api-access-7cs2p" (OuterVolumeSpecName: "kube-api-access-7cs2p") pod "57176bf3-d497-4a8d-99fa-df7873c7e446" (UID: "57176bf3-d497-4a8d-99fa-df7873c7e446"). InnerVolumeSpecName "kube-api-access-7cs2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.616738 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.616789 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cs2p\" (UniqueName: \"kubernetes.io/projected/57176bf3-d497-4a8d-99fa-df7873c7e446-kube-api-access-7cs2p\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.638911 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57176bf3-d497-4a8d-99fa-df7873c7e446" (UID: "57176bf3-d497-4a8d-99fa-df7873c7e446"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:22:49 crc kubenswrapper[4889]: I0219 00:22:49.718607 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57176bf3-d497-4a8d-99fa-df7873c7e446-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.055352 4889 generic.go:334] "Generic (PLEG): container finished" podID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerID="d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a" exitCode=0 Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.055406 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxct2" event={"ID":"57176bf3-d497-4a8d-99fa-df7873c7e446","Type":"ContainerDied","Data":"d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a"} Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.055420 4889 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xxct2" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.055452 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xxct2" event={"ID":"57176bf3-d497-4a8d-99fa-df7873c7e446","Type":"ContainerDied","Data":"b526a0e88e673a745016ca18b9f79aee13a19e1401344ecc8c57c3bf5fd0175a"} Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.055475 4889 scope.go:117] "RemoveContainer" containerID="d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.080600 4889 scope.go:117] "RemoveContainer" containerID="f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.094266 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xxct2"] Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.104011 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xxct2"] Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.106079 4889 scope.go:117] "RemoveContainer" containerID="36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.129360 4889 scope.go:117] "RemoveContainer" containerID="d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a" Feb 19 00:22:50 crc kubenswrapper[4889]: E0219 00:22:50.130120 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a\": container with ID starting with d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a not found: ID does not exist" containerID="d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.130191 4889 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a"} err="failed to get container status \"d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a\": rpc error: code = NotFound desc = could not find container \"d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a\": container with ID starting with d4c4e5fd52a0380a05987c247d9ca4c3c61996cfba26ef919e24fb9a1885c40a not found: ID does not exist" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.130285 4889 scope.go:117] "RemoveContainer" containerID="f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8" Feb 19 00:22:50 crc kubenswrapper[4889]: E0219 00:22:50.130848 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8\": container with ID starting with f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8 not found: ID does not exist" containerID="f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.131000 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8"} err="failed to get container status \"f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8\": rpc error: code = NotFound desc = could not find container \"f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8\": container with ID starting with f3ecbacc72eaf55690e626f78f3053ea9db5169e00e9e0198c788451305ed9e8 not found: ID does not exist" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.131065 4889 scope.go:117] "RemoveContainer" containerID="36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773" Feb 19 00:22:50 crc kubenswrapper[4889]: E0219 
00:22:50.131426 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773\": container with ID starting with 36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773 not found: ID does not exist" containerID="36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.131467 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773"} err="failed to get container status \"36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773\": rpc error: code = NotFound desc = could not find container \"36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773\": container with ID starting with 36beb29545423ea19a686fd2c7815eba5cea35b69d0b2fe574b6f9093a6fc773 not found: ID does not exist" Feb 19 00:22:50 crc kubenswrapper[4889]: I0219 00:22:50.751060 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57176bf3-d497-4a8d-99fa-df7873c7e446" path="/var/lib/kubelet/pods/57176bf3-d497-4a8d-99fa-df7873c7e446/volumes" Feb 19 00:23:07 crc kubenswrapper[4889]: I0219 00:23:07.781730 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:23:07 crc kubenswrapper[4889]: I0219 00:23:07.784504 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 00:23:37 crc kubenswrapper[4889]: I0219 00:23:37.782065 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:23:37 crc kubenswrapper[4889]: I0219 00:23:37.783241 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:23:37 crc kubenswrapper[4889]: I0219 00:23:37.783413 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:23:37 crc kubenswrapper[4889]: I0219 00:23:37.784436 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"420ed3c300f26e608399f169c685b0129d5ac56bdcf0a85ef3838579d162572e"} pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:23:37 crc kubenswrapper[4889]: I0219 00:23:37.784532 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" containerID="cri-o://420ed3c300f26e608399f169c685b0129d5ac56bdcf0a85ef3838579d162572e" gracePeriod=600 Feb 19 00:23:38 crc kubenswrapper[4889]: I0219 00:23:38.428842 4889 generic.go:334] "Generic (PLEG): container finished" podID="900d194e-937f-4a59-abba-21ed9f94f24f" 
containerID="420ed3c300f26e608399f169c685b0129d5ac56bdcf0a85ef3838579d162572e" exitCode=0 Feb 19 00:23:38 crc kubenswrapper[4889]: I0219 00:23:38.428944 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerDied","Data":"420ed3c300f26e608399f169c685b0129d5ac56bdcf0a85ef3838579d162572e"} Feb 19 00:23:38 crc kubenswrapper[4889]: I0219 00:23:38.429083 4889 scope.go:117] "RemoveContainer" containerID="307e06c1533d97412f05ea0d1d6dcd394440c81bd6cadf9dd2c0ed35a7f70904" Feb 19 00:23:39 crc kubenswrapper[4889]: I0219 00:23:39.442002 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"a422434d7e3c6cbd938447bafd74a7de72de7d70515b5ce97089afc1e4e805bb"} Feb 19 00:26:07 crc kubenswrapper[4889]: I0219 00:26:07.782270 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:26:07 crc kubenswrapper[4889]: I0219 00:26:07.784397 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:26:13 crc kubenswrapper[4889]: I0219 00:26:13.560210 4889 generic.go:334] "Generic (PLEG): container finished" podID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerID="9825a2d8e9615e4ede620e9ffa7e6b4d7b2240ad09cfda8a4b1a77af630b6a83" exitCode=0 Feb 19 00:26:13 crc kubenswrapper[4889]: I0219 
00:26:13.560325 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09","Type":"ContainerDied","Data":"9825a2d8e9615e4ede620e9ffa7e6b4d7b2240ad09cfda8a4b1a77af630b6a83"} Feb 19 00:26:14 crc kubenswrapper[4889]: E0219 00:26:14.717381 4889 info.go:109] Failed to get network devices: open /sys/class/net/d9a0def49a9d22d/address: no such file or directory Feb 19 00:26:14 crc kubenswrapper[4889]: I0219 00:26:14.857805 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028014 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-pull\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028112 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildcachedir\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028200 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-blob-cache\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028285 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-system-configs\") pod 
\"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028298 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028346 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-root\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028395 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-run\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028490 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildworkdir\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028549 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-node-pullsecrets\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc 
kubenswrapper[4889]: I0219 00:26:15.028603 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-proxy-ca-bundles\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028727 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l56zp\" (UniqueName: \"kubernetes.io/projected/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-kube-api-access-l56zp\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028781 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-push\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.028830 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-ca-bundles\") pod \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\" (UID: \"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09\") " Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.029046 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.029266 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.029415 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.029450 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.029480 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.030339 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.030489 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.034822 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.034871 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.035127 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-kube-api-access-l56zp" (OuterVolumeSpecName: "kube-api-access-l56zp") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "kube-api-access-l56zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.042470 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.130095 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l56zp\" (UniqueName: \"kubernetes.io/projected/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-kube-api-access-l56zp\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.130137 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.130152 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.130162 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.130172 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.130184 4889 reconciler_common.go:293] "Volume 
detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.578790 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09","Type":"ContainerDied","Data":"d9a0def49a9d22d7deb3b7f07ade25b9f774eebfee0c1de9735a9d62244c0374"} Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.578975 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9a0def49a9d22d7deb3b7f07ade25b9f774eebfee0c1de9735a9d62244c0374" Feb 19 00:26:15 crc kubenswrapper[4889]: I0219 00:26:15.579154 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Feb 19 00:26:16 crc kubenswrapper[4889]: I0219 00:26:16.898953 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:26:16 crc kubenswrapper[4889]: I0219 00:26:16.960179 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:18 crc kubenswrapper[4889]: I0219 00:26:18.240380 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). 
InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:26:18 crc kubenswrapper[4889]: I0219 00:26:18.278539 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:18 crc kubenswrapper[4889]: I0219 00:26:18.591749 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" (UID: "6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:26:18 crc kubenswrapper[4889]: I0219 00:26:18.685460 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349376 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:26:19 crc kubenswrapper[4889]: E0219 00:26:19.349723 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerName="extract-utilities" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349737 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerName="extract-utilities" Feb 19 00:26:19 crc kubenswrapper[4889]: E0219 00:26:19.349747 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerName="registry-server" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349753 4889 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerName="registry-server" Feb 19 00:26:19 crc kubenswrapper[4889]: E0219 00:26:19.349769 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerName="manage-dockerfile" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349776 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerName="manage-dockerfile" Feb 19 00:26:19 crc kubenswrapper[4889]: E0219 00:26:19.349786 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerName="docker-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349791 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerName="docker-build" Feb 19 00:26:19 crc kubenswrapper[4889]: E0219 00:26:19.349804 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerName="extract-content" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349810 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerName="extract-content" Feb 19 00:26:19 crc kubenswrapper[4889]: E0219 00:26:19.349817 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerName="extract-utilities" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349825 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerName="extract-utilities" Feb 19 00:26:19 crc kubenswrapper[4889]: E0219 00:26:19.349833 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerName="git-clone" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349839 4889 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerName="git-clone" Feb 19 00:26:19 crc kubenswrapper[4889]: E0219 00:26:19.349847 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerName="extract-content" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349855 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerName="extract-content" Feb 19 00:26:19 crc kubenswrapper[4889]: E0219 00:26:19.349865 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerName="registry-server" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349872 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerName="registry-server" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349986 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c576cf-f3e6-4a28-9686-6bd4fa028dbe" containerName="registry-server" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.349998 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="57176bf3-d497-4a8d-99fa-df7873c7e446" containerName="registry-server" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.350007 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1bc8fd-a546-4bb2-886d-f0ddf89c2c09" containerName="docker-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.350778 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.354987 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.355286 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.356390 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.358411 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-9cxxm" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.372508 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396718 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396774 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-pull\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396801 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396835 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k625p\" (UniqueName: \"kubernetes.io/projected/569df4f0-e4fa-4f86-997d-8af1d6d44b22-kube-api-access-k625p\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396857 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396873 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396911 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396938 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396956 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.396980 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.397007 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-push\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.397031 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.498048 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-k625p\" (UniqueName: \"kubernetes.io/projected/569df4f0-e4fa-4f86-997d-8af1d6d44b22-kube-api-access-k625p\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.498536 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.498641 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.498737 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.498851 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.498951 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.499042 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.499159 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-push\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.500743 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.500799 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.499694 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-system-configs\") pod 
\"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.499770 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.500057 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.500364 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.499432 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.499587 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 
19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.501002 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.501198 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-pull\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.501276 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.501328 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.501670 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.511753 4889 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-pull\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.514836 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-push\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.518710 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k625p\" (UniqueName: \"kubernetes.io/projected/569df4f0-e4fa-4f86-997d-8af1d6d44b22-kube-api-access-k625p\") pod \"sg-bridge-1-build\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:19 crc kubenswrapper[4889]: I0219 00:26:19.668053 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:20 crc kubenswrapper[4889]: I0219 00:26:20.141060 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:26:20 crc kubenswrapper[4889]: I0219 00:26:20.687792 4889 generic.go:334] "Generic (PLEG): container finished" podID="569df4f0-e4fa-4f86-997d-8af1d6d44b22" containerID="a90516d91ff40681cc0c464fc8831c940cd090dfc183ec37423e290283c4b972" exitCode=0 Feb 19 00:26:20 crc kubenswrapper[4889]: I0219 00:26:20.687864 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"569df4f0-e4fa-4f86-997d-8af1d6d44b22","Type":"ContainerDied","Data":"a90516d91ff40681cc0c464fc8831c940cd090dfc183ec37423e290283c4b972"} Feb 19 00:26:20 crc kubenswrapper[4889]: I0219 00:26:20.687908 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"569df4f0-e4fa-4f86-997d-8af1d6d44b22","Type":"ContainerStarted","Data":"345d9d5d01375e7485049e450fce027b0b29c445b949a554ace594e3e533caed"} Feb 19 00:26:21 crc kubenswrapper[4889]: I0219 00:26:21.698120 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"569df4f0-e4fa-4f86-997d-8af1d6d44b22","Type":"ContainerStarted","Data":"24c3b40c6b81070917b4c9cd6602015820a76f11881ca666aafc75a45ad84028"} Feb 19 00:26:21 crc kubenswrapper[4889]: I0219 00:26:21.726537 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=2.726512648 podStartE2EDuration="2.726512648s" podCreationTimestamp="2026-02-19 00:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:26:21.721126759 +0000 UTC m=+1207.685791750" watchObservedRunningTime="2026-02-19 00:26:21.726512648 +0000 UTC m=+1207.691177639" Feb 19 00:26:29 
crc kubenswrapper[4889]: I0219 00:26:29.759149 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_569df4f0-e4fa-4f86-997d-8af1d6d44b22/docker-build/0.log" Feb 19 00:26:29 crc kubenswrapper[4889]: I0219 00:26:29.760576 4889 generic.go:334] "Generic (PLEG): container finished" podID="569df4f0-e4fa-4f86-997d-8af1d6d44b22" containerID="24c3b40c6b81070917b4c9cd6602015820a76f11881ca666aafc75a45ad84028" exitCode=1 Feb 19 00:26:29 crc kubenswrapper[4889]: I0219 00:26:29.760629 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"569df4f0-e4fa-4f86-997d-8af1d6d44b22","Type":"ContainerDied","Data":"24c3b40c6b81070917b4c9cd6602015820a76f11881ca666aafc75a45ad84028"} Feb 19 00:26:30 crc kubenswrapper[4889]: I0219 00:26:30.101611 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:26:30 crc kubenswrapper[4889]: I0219 00:26:30.989822 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_569df4f0-e4fa-4f86-997d-8af1d6d44b22/docker-build/0.log" Feb 19 00:26:30 crc kubenswrapper[4889]: I0219 00:26:30.991048 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179401 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-blob-cache\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179452 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildworkdir\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179506 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-root\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179585 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-system-configs\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179607 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-ca-bundles\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179632 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-k625p\" (UniqueName: \"kubernetes.io/projected/569df4f0-e4fa-4f86-997d-8af1d6d44b22-kube-api-access-k625p\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179710 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-node-pullsecrets\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179760 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-run\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179786 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-proxy-ca-bundles\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179811 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-pull\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179827 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildcachedir\") pod 
\"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.179858 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-push\") pod \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\" (UID: \"569df4f0-e4fa-4f86-997d-8af1d6d44b22\") " Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.180388 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.180767 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.180996 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.181358 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.181600 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.181660 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.181760 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.187205 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.187703 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.187735 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569df4f0-e4fa-4f86-997d-8af1d6d44b22-kube-api-access-k625p" (OuterVolumeSpecName: "kube-api-access-k625p") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "kube-api-access-k625p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.256296 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282055 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282111 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282123 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282164 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282176 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k625p\" (UniqueName: \"kubernetes.io/projected/569df4f0-e4fa-4f86-997d-8af1d6d44b22-kube-api-access-k625p\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282192 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282206 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 
00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282217 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282256 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/569df4f0-e4fa-4f86-997d-8af1d6d44b22-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282268 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/569df4f0-e4fa-4f86-997d-8af1d6d44b22-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.282279 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/569df4f0-e4fa-4f86-997d-8af1d6d44b22-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.554101 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "569df4f0-e4fa-4f86-997d-8af1d6d44b22" (UID: "569df4f0-e4fa-4f86-997d-8af1d6d44b22"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.586820 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/569df4f0-e4fa-4f86-997d-8af1d6d44b22-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.753768 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 19 00:26:31 crc kubenswrapper[4889]: E0219 00:26:31.754178 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569df4f0-e4fa-4f86-997d-8af1d6d44b22" containerName="manage-dockerfile" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.754205 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="569df4f0-e4fa-4f86-997d-8af1d6d44b22" containerName="manage-dockerfile" Feb 19 00:26:31 crc kubenswrapper[4889]: E0219 00:26:31.754247 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569df4f0-e4fa-4f86-997d-8af1d6d44b22" containerName="docker-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.754258 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="569df4f0-e4fa-4f86-997d-8af1d6d44b22" containerName="docker-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.754408 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="569df4f0-e4fa-4f86-997d-8af1d6d44b22" containerName="docker-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.755592 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.757786 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.757862 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.759309 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.775323 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.780326 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_569df4f0-e4fa-4f86-997d-8af1d6d44b22/docker-build/0.log" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.780772 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"569df4f0-e4fa-4f86-997d-8af1d6d44b22","Type":"ContainerDied","Data":"345d9d5d01375e7485049e450fce027b0b29c445b949a554ace594e3e533caed"} Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.780816 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="345d9d5d01375e7485049e450fce027b0b29c445b949a554ace594e3e533caed" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.780905 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789138 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789214 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789283 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789320 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789345 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: 
\"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789372 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789411 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-push\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789440 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx65g\" (UniqueName: \"kubernetes.io/projected/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-kube-api-access-vx65g\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789486 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789567 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: 
\"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-pull\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789594 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.789622 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.830930 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.838218 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890541 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890633 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-run\") pod 
\"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890669 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890693 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890722 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890723 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890761 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-push\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " 
pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890762 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890792 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx65g\" (UniqueName: \"kubernetes.io/projected/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-kube-api-access-vx65g\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890826 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890867 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-pull\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890890 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc 
kubenswrapper[4889]: I0219 00:26:31.890918 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.890951 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.891315 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.891483 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.891484 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.891987 4889 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.892090 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.892214 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.892408 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.896862 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-pull\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.899627 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: 
\"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-push\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:31 crc kubenswrapper[4889]: I0219 00:26:31.910262 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx65g\" (UniqueName: \"kubernetes.io/projected/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-kube-api-access-vx65g\") pod \"sg-bridge-2-build\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:32 crc kubenswrapper[4889]: I0219 00:26:32.076879 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 19 00:26:32 crc kubenswrapper[4889]: I0219 00:26:32.301517 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Feb 19 00:26:32 crc kubenswrapper[4889]: I0219 00:26:32.733168 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="569df4f0-e4fa-4f86-997d-8af1d6d44b22" path="/var/lib/kubelet/pods/569df4f0-e4fa-4f86-997d-8af1d6d44b22/volumes" Feb 19 00:26:32 crc kubenswrapper[4889]: I0219 00:26:32.787372 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"cf98447e-d875-4d54-8aa5-ad99a6ef24dc","Type":"ContainerStarted","Data":"bac8e320120390102da7d6df2f85c9d375f466098d8573e18386c9f7c01394eb"} Feb 19 00:26:32 crc kubenswrapper[4889]: I0219 00:26:32.787441 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"cf98447e-d875-4d54-8aa5-ad99a6ef24dc","Type":"ContainerStarted","Data":"9023ce75153befb66a09817d8b11efa76bc936d541a6d77399bbbf44494eba12"} Feb 19 00:26:33 crc kubenswrapper[4889]: I0219 00:26:33.796017 4889 generic.go:334] "Generic (PLEG): container finished" podID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" 
containerID="bac8e320120390102da7d6df2f85c9d375f466098d8573e18386c9f7c01394eb" exitCode=0 Feb 19 00:26:33 crc kubenswrapper[4889]: I0219 00:26:33.796085 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"cf98447e-d875-4d54-8aa5-ad99a6ef24dc","Type":"ContainerDied","Data":"bac8e320120390102da7d6df2f85c9d375f466098d8573e18386c9f7c01394eb"} Feb 19 00:26:34 crc kubenswrapper[4889]: I0219 00:26:34.804446 4889 generic.go:334] "Generic (PLEG): container finished" podID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" containerID="4267e2bfa4740d0a65a488e428eae1b549ff13e52ccb0f307bf62e9e0fddb3cb" exitCode=0 Feb 19 00:26:34 crc kubenswrapper[4889]: I0219 00:26:34.804534 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"cf98447e-d875-4d54-8aa5-ad99a6ef24dc","Type":"ContainerDied","Data":"4267e2bfa4740d0a65a488e428eae1b549ff13e52ccb0f307bf62e9e0fddb3cb"} Feb 19 00:26:34 crc kubenswrapper[4889]: I0219 00:26:34.838290 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_cf98447e-d875-4d54-8aa5-ad99a6ef24dc/manage-dockerfile/0.log" Feb 19 00:26:35 crc kubenswrapper[4889]: I0219 00:26:35.818830 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"cf98447e-d875-4d54-8aa5-ad99a6ef24dc","Type":"ContainerStarted","Data":"841e6d04390ae6bc83ecac24c1a97d256f94cff2f0b13c759a8a82e9179c26b2"} Feb 19 00:26:35 crc kubenswrapper[4889]: I0219 00:26:35.844753 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=4.844727891 podStartE2EDuration="4.844727891s" podCreationTimestamp="2026-02-19 00:26:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:26:35.843500095 +0000 UTC m=+1221.808165086" 
watchObservedRunningTime="2026-02-19 00:26:35.844727891 +0000 UTC m=+1221.809392892" Feb 19 00:26:37 crc kubenswrapper[4889]: I0219 00:26:37.782002 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:26:37 crc kubenswrapper[4889]: I0219 00:26:37.782591 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.008251 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g2h6p"] Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.010252 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.022221 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g2h6p"] Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.187330 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-utilities\") pod \"certified-operators-g2h6p\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.187405 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-catalog-content\") pod \"certified-operators-g2h6p\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.187436 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd5qc\" (UniqueName: \"kubernetes.io/projected/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-kube-api-access-hd5qc\") pod \"certified-operators-g2h6p\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.289172 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-utilities\") pod \"certified-operators-g2h6p\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.289274 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-catalog-content\") pod \"certified-operators-g2h6p\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.289312 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd5qc\" (UniqueName: \"kubernetes.io/projected/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-kube-api-access-hd5qc\") pod \"certified-operators-g2h6p\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.289930 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-catalog-content\") pod \"certified-operators-g2h6p\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.289927 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-utilities\") pod \"certified-operators-g2h6p\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.322160 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd5qc\" (UniqueName: \"kubernetes.io/projected/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-kube-api-access-hd5qc\") pod \"certified-operators-g2h6p\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.332465 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.606356 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g2h6p"] Feb 19 00:26:51 crc kubenswrapper[4889]: W0219 00:26:51.616476 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0c0628a_019f_4b2f_9c2d_8edf64d2475c.slice/crio-78d28010298484596e376dad3bf276f2bf7f6429fea1310407a6bae864e2287f WatchSource:0}: Error finding container 78d28010298484596e376dad3bf276f2bf7f6429fea1310407a6bae864e2287f: Status 404 returned error can't find the container with id 78d28010298484596e376dad3bf276f2bf7f6429fea1310407a6bae864e2287f Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.940532 4889 generic.go:334] "Generic (PLEG): container finished" podID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerID="ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97" exitCode=0 Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.940661 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2h6p" event={"ID":"b0c0628a-019f-4b2f-9c2d-8edf64d2475c","Type":"ContainerDied","Data":"ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97"} Feb 19 00:26:51 crc kubenswrapper[4889]: I0219 00:26:51.941016 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2h6p" event={"ID":"b0c0628a-019f-4b2f-9c2d-8edf64d2475c","Type":"ContainerStarted","Data":"78d28010298484596e376dad3bf276f2bf7f6429fea1310407a6bae864e2287f"} Feb 19 00:26:52 crc kubenswrapper[4889]: I0219 00:26:52.949080 4889 generic.go:334] "Generic (PLEG): container finished" podID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerID="708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7" exitCode=0 Feb 19 00:26:52 crc kubenswrapper[4889]: I0219 
00:26:52.949150 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2h6p" event={"ID":"b0c0628a-019f-4b2f-9c2d-8edf64d2475c","Type":"ContainerDied","Data":"708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7"} Feb 19 00:26:53 crc kubenswrapper[4889]: I0219 00:26:53.970709 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2h6p" event={"ID":"b0c0628a-019f-4b2f-9c2d-8edf64d2475c","Type":"ContainerStarted","Data":"4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28"} Feb 19 00:26:53 crc kubenswrapper[4889]: I0219 00:26:53.994248 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g2h6p" podStartSLOduration=2.5935474469999997 podStartE2EDuration="3.994198589s" podCreationTimestamp="2026-02-19 00:26:50 +0000 UTC" firstStartedPulling="2026-02-19 00:26:51.942360762 +0000 UTC m=+1237.907025753" lastFinishedPulling="2026-02-19 00:26:53.343011904 +0000 UTC m=+1239.307676895" observedRunningTime="2026-02-19 00:26:53.991491648 +0000 UTC m=+1239.956156649" watchObservedRunningTime="2026-02-19 00:26:53.994198589 +0000 UTC m=+1239.958863580" Feb 19 00:27:01 crc kubenswrapper[4889]: I0219 00:27:01.333272 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:27:01 crc kubenswrapper[4889]: I0219 00:27:01.334244 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:27:01 crc kubenswrapper[4889]: I0219 00:27:01.374331 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:27:02 crc kubenswrapper[4889]: I0219 00:27:02.073935 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:27:02 crc kubenswrapper[4889]: I0219 00:27:02.148709 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g2h6p"] Feb 19 00:27:04 crc kubenswrapper[4889]: I0219 00:27:04.039260 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g2h6p" podUID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerName="registry-server" containerID="cri-o://4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28" gracePeriod=2 Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.707321 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.770502 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-utilities\") pod \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.770617 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd5qc\" (UniqueName: \"kubernetes.io/projected/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-kube-api-access-hd5qc\") pod \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.770734 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-catalog-content\") pod \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\" (UID: \"b0c0628a-019f-4b2f-9c2d-8edf64d2475c\") " Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.772601 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-utilities" (OuterVolumeSpecName: "utilities") pod "b0c0628a-019f-4b2f-9c2d-8edf64d2475c" (UID: "b0c0628a-019f-4b2f-9c2d-8edf64d2475c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.777818 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-kube-api-access-hd5qc" (OuterVolumeSpecName: "kube-api-access-hd5qc") pod "b0c0628a-019f-4b2f-9c2d-8edf64d2475c" (UID: "b0c0628a-019f-4b2f-9c2d-8edf64d2475c"). InnerVolumeSpecName "kube-api-access-hd5qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.827270 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0c0628a-019f-4b2f-9c2d-8edf64d2475c" (UID: "b0c0628a-019f-4b2f-9c2d-8edf64d2475c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.872720 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.872762 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:05 crc kubenswrapper[4889]: I0219 00:27:05.872775 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd5qc\" (UniqueName: \"kubernetes.io/projected/b0c0628a-019f-4b2f-9c2d-8edf64d2475c-kube-api-access-hd5qc\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.056077 4889 generic.go:334] "Generic (PLEG): container finished" podID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerID="4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28" exitCode=0 Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.056151 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2h6p" event={"ID":"b0c0628a-019f-4b2f-9c2d-8edf64d2475c","Type":"ContainerDied","Data":"4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28"} Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.056202 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g2h6p" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.056264 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2h6p" event={"ID":"b0c0628a-019f-4b2f-9c2d-8edf64d2475c","Type":"ContainerDied","Data":"78d28010298484596e376dad3bf276f2bf7f6429fea1310407a6bae864e2287f"} Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.056321 4889 scope.go:117] "RemoveContainer" containerID="4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.078668 4889 scope.go:117] "RemoveContainer" containerID="708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.086896 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g2h6p"] Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.099409 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g2h6p"] Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.124107 4889 scope.go:117] "RemoveContainer" containerID="ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.142397 4889 scope.go:117] "RemoveContainer" containerID="4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28" Feb 19 00:27:06 crc kubenswrapper[4889]: E0219 00:27:06.144374 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28\": container with ID starting with 4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28 not found: ID does not exist" containerID="4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.144444 4889 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28"} err="failed to get container status \"4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28\": rpc error: code = NotFound desc = could not find container \"4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28\": container with ID starting with 4dff12fb7dbd8ade61e6f3a33a866ca42a11205f54e52be6a2709e1a02010e28 not found: ID does not exist" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.144485 4889 scope.go:117] "RemoveContainer" containerID="708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7" Feb 19 00:27:06 crc kubenswrapper[4889]: E0219 00:27:06.145096 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7\": container with ID starting with 708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7 not found: ID does not exist" containerID="708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.145139 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7"} err="failed to get container status \"708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7\": rpc error: code = NotFound desc = could not find container \"708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7\": container with ID starting with 708c3c13692c4ab5233d708cfb7b90a191249e8cb208ed037df76ab8ec9c4ac7 not found: ID does not exist" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.145174 4889 scope.go:117] "RemoveContainer" containerID="ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97" Feb 19 00:27:06 crc kubenswrapper[4889]: E0219 
00:27:06.145542 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97\": container with ID starting with ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97 not found: ID does not exist" containerID="ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.145580 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97"} err="failed to get container status \"ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97\": rpc error: code = NotFound desc = could not find container \"ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97\": container with ID starting with ee201122daa0db66bc691cda2b8cfcf2f708a8b7838355942c9f35ab382aad97 not found: ID does not exist" Feb 19 00:27:06 crc kubenswrapper[4889]: I0219 00:27:06.734239 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" path="/var/lib/kubelet/pods/b0c0628a-019f-4b2f-9c2d-8edf64d2475c/volumes" Feb 19 00:27:07 crc kubenswrapper[4889]: I0219 00:27:07.782306 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:27:07 crc kubenswrapper[4889]: I0219 00:27:07.783541 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 19 00:27:07 crc kubenswrapper[4889]: I0219 00:27:07.783685 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:27:07 crc kubenswrapper[4889]: I0219 00:27:07.784692 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a422434d7e3c6cbd938447bafd74a7de72de7d70515b5ce97089afc1e4e805bb"} pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:27:07 crc kubenswrapper[4889]: I0219 00:27:07.784776 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" containerID="cri-o://a422434d7e3c6cbd938447bafd74a7de72de7d70515b5ce97089afc1e4e805bb" gracePeriod=600 Feb 19 00:27:09 crc kubenswrapper[4889]: I0219 00:27:09.081687 4889 generic.go:334] "Generic (PLEG): container finished" podID="900d194e-937f-4a59-abba-21ed9f94f24f" containerID="a422434d7e3c6cbd938447bafd74a7de72de7d70515b5ce97089afc1e4e805bb" exitCode=0 Feb 19 00:27:09 crc kubenswrapper[4889]: I0219 00:27:09.081789 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerDied","Data":"a422434d7e3c6cbd938447bafd74a7de72de7d70515b5ce97089afc1e4e805bb"} Feb 19 00:27:09 crc kubenswrapper[4889]: I0219 00:27:09.082106 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"40c89b17b39712776e841fba6d4f65f04b05a7818e694b7b66336b8111d61491"} Feb 19 00:27:09 crc 
kubenswrapper[4889]: I0219 00:27:09.082137 4889 scope.go:117] "RemoveContainer" containerID="420ed3c300f26e608399f169c685b0129d5ac56bdcf0a85ef3838579d162572e" Feb 19 00:27:22 crc kubenswrapper[4889]: I0219 00:27:22.174572 4889 generic.go:334] "Generic (PLEG): container finished" podID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" containerID="841e6d04390ae6bc83ecac24c1a97d256f94cff2f0b13c759a8a82e9179c26b2" exitCode=0 Feb 19 00:27:22 crc kubenswrapper[4889]: I0219 00:27:22.175299 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"cf98447e-d875-4d54-8aa5-ad99a6ef24dc","Type":"ContainerDied","Data":"841e6d04390ae6bc83ecac24c1a97d256f94cff2f0b13c759a8a82e9179c26b2"} Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.420995 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.554018 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-node-pullsecrets\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.554541 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildworkdir\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.554662 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildcachedir\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc 
kubenswrapper[4889]: I0219 00:27:23.554815 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-blob-cache\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.554923 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-run\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.554192 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.554795 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.555112 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-pull\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.555333 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx65g\" (UniqueName: \"kubernetes.io/projected/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-kube-api-access-vx65g\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.555435 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-push\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.555541 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-proxy-ca-bundles\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.555633 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-root\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.555749 4889 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-ca-bundles\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.555822 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-system-configs\") pod \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\" (UID: \"cf98447e-d875-4d54-8aa5-ad99a6ef24dc\") " Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.556167 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.556278 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.556182 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.556270 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.556349 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.556716 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.556813 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.562423 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-kube-api-access-vx65g" (OuterVolumeSpecName: "kube-api-access-vx65g") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "kube-api-access-vx65g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.562421 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.562464 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.658629 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.658678 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.658690 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx65g\" (UniqueName: \"kubernetes.io/projected/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-kube-api-access-vx65g\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.658699 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.658709 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.658719 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.658728 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.658741 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.685460 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:23 crc kubenswrapper[4889]: I0219 00:27:23.760590 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:24 crc kubenswrapper[4889]: I0219 00:27:24.193417 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"cf98447e-d875-4d54-8aa5-ad99a6ef24dc","Type":"ContainerDied","Data":"9023ce75153befb66a09817d8b11efa76bc936d541a6d77399bbbf44494eba12"} Feb 19 00:27:24 crc kubenswrapper[4889]: I0219 00:27:24.193467 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9023ce75153befb66a09817d8b11efa76bc936d541a6d77399bbbf44494eba12" Feb 19 00:27:24 crc kubenswrapper[4889]: I0219 00:27:24.193851 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Feb 19 00:27:24 crc kubenswrapper[4889]: I0219 00:27:24.275336 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "cf98447e-d875-4d54-8aa5-ad99a6ef24dc" (UID: "cf98447e-d875-4d54-8aa5-ad99a6ef24dc"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:24 crc kubenswrapper[4889]: I0219 00:27:24.370455 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/cf98447e-d875-4d54-8aa5-ad99a6ef24dc-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.952109 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 19 00:27:27 crc kubenswrapper[4889]: E0219 00:27:27.953249 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerName="extract-utilities" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.953264 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerName="extract-utilities" Feb 19 00:27:27 crc kubenswrapper[4889]: E0219 00:27:27.953275 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" containerName="manage-dockerfile" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.953282 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" containerName="manage-dockerfile" Feb 19 00:27:27 crc kubenswrapper[4889]: E0219 00:27:27.953295 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" containerName="docker-build" Feb 19 
00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.953302 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" containerName="docker-build" Feb 19 00:27:27 crc kubenswrapper[4889]: E0219 00:27:27.953310 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerName="extract-content" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.953317 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerName="extract-content" Feb 19 00:27:27 crc kubenswrapper[4889]: E0219 00:27:27.953329 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" containerName="git-clone" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.953338 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" containerName="git-clone" Feb 19 00:27:27 crc kubenswrapper[4889]: E0219 00:27:27.953347 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerName="registry-server" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.953354 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerName="registry-server" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.953472 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf98447e-d875-4d54-8aa5-ad99a6ef24dc" containerName="docker-build" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.953491 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c0628a-019f-4b2f-9c2d-8edf64d2475c" containerName="registry-server" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.954301 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.962868 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.963018 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.963345 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-9cxxm" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.964127 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Feb 19 00:27:27 crc kubenswrapper[4889]: I0219 00:27:27.988548 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027176 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027260 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027341 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027370 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027405 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027445 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027479 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027532 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027560 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5qkl\" (UniqueName: \"kubernetes.io/projected/ff13d9ea-994a-4168-982e-113f5ca01778-kube-api-access-w5qkl\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027590 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027623 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.027657 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128548 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128630 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128663 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128680 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128698 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128721 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128757 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128794 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128841 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc 
kubenswrapper[4889]: I0219 00:27:28.128856 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128871 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5qkl\" (UniqueName: \"kubernetes.io/projected/ff13d9ea-994a-4168-982e-113f5ca01778-kube-api-access-w5qkl\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.128898 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.129120 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.129431 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.129450 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.129726 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.129827 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.129893 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.130013 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.130117 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.130196 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.135033 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.135539 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.148236 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5qkl\" (UniqueName: 
\"kubernetes.io/projected/ff13d9ea-994a-4168-982e-113f5ca01778-kube-api-access-w5qkl\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.341613 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:28 crc kubenswrapper[4889]: I0219 00:27:28.564004 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 19 00:27:29 crc kubenswrapper[4889]: I0219 00:27:29.229364 4889 generic.go:334] "Generic (PLEG): container finished" podID="ff13d9ea-994a-4168-982e-113f5ca01778" containerID="bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69" exitCode=0 Feb 19 00:27:29 crc kubenswrapper[4889]: I0219 00:27:29.229487 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ff13d9ea-994a-4168-982e-113f5ca01778","Type":"ContainerDied","Data":"bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69"} Feb 19 00:27:29 crc kubenswrapper[4889]: I0219 00:27:29.229815 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ff13d9ea-994a-4168-982e-113f5ca01778","Type":"ContainerStarted","Data":"d567ffcbda640ab54ce399e4e48c8faa8f60b52298ce85e7fad1b480c2333246"} Feb 19 00:27:30 crc kubenswrapper[4889]: I0219 00:27:30.240304 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ff13d9ea-994a-4168-982e-113f5ca01778","Type":"ContainerStarted","Data":"69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b"} Feb 19 00:27:30 crc kubenswrapper[4889]: I0219 00:27:30.271502 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=3.271471338 podStartE2EDuration="3.271471338s" podCreationTimestamp="2026-02-19 00:27:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:27:30.266888071 +0000 UTC m=+1276.231553072" watchObservedRunningTime="2026-02-19 00:27:30.271471338 +0000 UTC m=+1276.236136329" Feb 19 00:27:38 crc kubenswrapper[4889]: I0219 00:27:38.702613 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 19 00:27:38 crc kubenswrapper[4889]: I0219 00:27:38.703827 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="ff13d9ea-994a-4168-982e-113f5ca01778" containerName="docker-build" containerID="cri-o://69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b" gracePeriod=30 Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.209948 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ff13d9ea-994a-4168-982e-113f5ca01778/docker-build/0.log" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.210491 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.309643 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_ff13d9ea-994a-4168-982e-113f5ca01778/docker-build/0.log" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.310037 4889 generic.go:334] "Generic (PLEG): container finished" podID="ff13d9ea-994a-4168-982e-113f5ca01778" containerID="69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b" exitCode=1 Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.310090 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ff13d9ea-994a-4168-982e-113f5ca01778","Type":"ContainerDied","Data":"69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b"} Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.310116 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.310136 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"ff13d9ea-994a-4168-982e-113f5ca01778","Type":"ContainerDied","Data":"d567ffcbda640ab54ce399e4e48c8faa8f60b52298ce85e7fad1b480c2333246"} Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.310159 4889 scope.go:117] "RemoveContainer" containerID="69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.333476 4889 scope.go:117] "RemoveContainer" containerID="bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.363682 4889 scope.go:117] "RemoveContainer" containerID="69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b" Feb 19 00:27:39 crc kubenswrapper[4889]: E0219 00:27:39.364243 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b\": container with ID starting with 69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b not found: ID does not exist" containerID="69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.364291 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b"} err="failed to get container status \"69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b\": rpc error: code = NotFound desc = could not find container \"69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b\": container with ID starting with 69b4452a159131c070550c55b1b607f099c67f110383d30eba7294e1ed20030b not found: ID does not exist" 
Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.364327 4889 scope.go:117] "RemoveContainer" containerID="bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69" Feb 19 00:27:39 crc kubenswrapper[4889]: E0219 00:27:39.364747 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69\": container with ID starting with bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69 not found: ID does not exist" containerID="bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.364837 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69"} err="failed to get container status \"bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69\": rpc error: code = NotFound desc = could not find container \"bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69\": container with ID starting with bd24c579fd6c77ad805af068ac3a1893098d1ab46f73882c803dbffec3c82e69 not found: ID does not exist" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400439 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-push\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400503 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-node-pullsecrets\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc 
kubenswrapper[4889]: I0219 00:27:39.400537 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-buildcachedir\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400565 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-root\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400598 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-pull\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400682 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-system-configs\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400722 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-run\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400679 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400768 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-buildworkdir\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400763 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400803 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-ca-bundles\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400870 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-build-blob-cache\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400907 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-proxy-ca-bundles\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.400930 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5qkl\" (UniqueName: \"kubernetes.io/projected/ff13d9ea-994a-4168-982e-113f5ca01778-kube-api-access-w5qkl\") pod \"ff13d9ea-994a-4168-982e-113f5ca01778\" (UID: \"ff13d9ea-994a-4168-982e-113f5ca01778\") " Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.401207 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.401236 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/ff13d9ea-994a-4168-982e-113f5ca01778-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.401714 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.402773 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.402814 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.403264 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.403491 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.407990 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff13d9ea-994a-4168-982e-113f5ca01778-kube-api-access-w5qkl" (OuterVolumeSpecName: "kube-api-access-w5qkl") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "kube-api-access-w5qkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.408076 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.408140 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.474011 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.503248 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.503306 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.503316 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.503328 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5qkl\" (UniqueName: \"kubernetes.io/projected/ff13d9ea-994a-4168-982e-113f5ca01778-kube-api-access-w5qkl\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.503338 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.503349 4889 reconciler_common.go:293] 
"Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/ff13d9ea-994a-4168-982e-113f5ca01778-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.503358 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/ff13d9ea-994a-4168-982e-113f5ca01778-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.503368 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.503380 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.800645 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "ff13d9ea-994a-4168-982e-113f5ca01778" (UID: "ff13d9ea-994a-4168-982e-113f5ca01778"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.808647 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/ff13d9ea-994a-4168-982e-113f5ca01778-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.956469 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 19 00:27:39 crc kubenswrapper[4889]: I0219 00:27:39.963639 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.341478 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 19 00:27:40 crc kubenswrapper[4889]: E0219 00:27:40.342416 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff13d9ea-994a-4168-982e-113f5ca01778" containerName="manage-dockerfile" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.342448 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff13d9ea-994a-4168-982e-113f5ca01778" containerName="manage-dockerfile" Feb 19 00:27:40 crc kubenswrapper[4889]: E0219 00:27:40.342464 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff13d9ea-994a-4168-982e-113f5ca01778" containerName="docker-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.342474 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff13d9ea-994a-4168-982e-113f5ca01778" containerName="docker-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.342625 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff13d9ea-994a-4168-982e-113f5ca01778" containerName="docker-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.346803 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.349587 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-9cxxm" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.349654 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.349595 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.349938 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.364121 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519180 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519253 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8kgd\" (UniqueName: \"kubernetes.io/projected/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-kube-api-access-p8kgd\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519425 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519550 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519616 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519648 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519779 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519851 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519913 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519947 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.519984 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.520031 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621316 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621396 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621432 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621453 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621515 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621555 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621609 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621638 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8kgd\" (UniqueName: \"kubernetes.io/projected/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-kube-api-access-p8kgd\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621687 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621721 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621753 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621779 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621869 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.621947 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.622005 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.622175 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.622370 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.622709 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.622699 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-system-configs\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.622959 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.623346 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.626898 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.627187 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.641216 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8kgd\" (UniqueName: 
\"kubernetes.io/projected/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-kube-api-access-p8kgd\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.672540 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:27:40 crc kubenswrapper[4889]: I0219 00:27:40.735088 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff13d9ea-994a-4168-982e-113f5ca01778" path="/var/lib/kubelet/pods/ff13d9ea-994a-4168-982e-113f5ca01778/volumes" Feb 19 00:27:41 crc kubenswrapper[4889]: I0219 00:27:41.125568 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Feb 19 00:27:41 crc kubenswrapper[4889]: I0219 00:27:41.362946 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a25d0ab5-f3cd-4477-8272-2b08c0d16c47","Type":"ContainerStarted","Data":"fd52861509d76527337de2f0db0af9be86ba27f2717462cf78b3520467622ff8"} Feb 19 00:27:42 crc kubenswrapper[4889]: I0219 00:27:42.375756 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a25d0ab5-f3cd-4477-8272-2b08c0d16c47","Type":"ContainerStarted","Data":"0aa38e6cf82a29b53c58478902cf28cb6c3e2e5ba5a04da2228f3c81ff1d66d6"} Feb 19 00:27:43 crc kubenswrapper[4889]: I0219 00:27:43.384997 4889 generic.go:334] "Generic (PLEG): container finished" podID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerID="0aa38e6cf82a29b53c58478902cf28cb6c3e2e5ba5a04da2228f3c81ff1d66d6" exitCode=0 Feb 19 00:27:43 crc kubenswrapper[4889]: I0219 00:27:43.385117 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"a25d0ab5-f3cd-4477-8272-2b08c0d16c47","Type":"ContainerDied","Data":"0aa38e6cf82a29b53c58478902cf28cb6c3e2e5ba5a04da2228f3c81ff1d66d6"} Feb 19 00:27:44 crc kubenswrapper[4889]: I0219 00:27:44.395995 4889 generic.go:334] "Generic (PLEG): container finished" podID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerID="8dc2675762dbe0407a4cb9993894d36fa6a6bc2f3228260a0fd52a08ac123e85" exitCode=0 Feb 19 00:27:44 crc kubenswrapper[4889]: I0219 00:27:44.396269 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a25d0ab5-f3cd-4477-8272-2b08c0d16c47","Type":"ContainerDied","Data":"8dc2675762dbe0407a4cb9993894d36fa6a6bc2f3228260a0fd52a08ac123e85"} Feb 19 00:27:44 crc kubenswrapper[4889]: I0219 00:27:44.440989 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_a25d0ab5-f3cd-4477-8272-2b08c0d16c47/manage-dockerfile/0.log" Feb 19 00:27:45 crc kubenswrapper[4889]: I0219 00:27:45.407255 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a25d0ab5-f3cd-4477-8272-2b08c0d16c47","Type":"ContainerStarted","Data":"957d74795cfb59a68ab79d45515ef1fc025d741ba2013c0e08d7cd0dc823d0f0"} Feb 19 00:27:45 crc kubenswrapper[4889]: I0219 00:27:45.441034 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.44101296 podStartE2EDuration="5.44101296s" podCreationTimestamp="2026-02-19 00:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:27:45.437619908 +0000 UTC m=+1291.402284909" watchObservedRunningTime="2026-02-19 00:27:45.44101296 +0000 UTC m=+1291.405677951" Feb 19 00:28:47 crc kubenswrapper[4889]: I0219 00:28:47.856730 4889 generic.go:334] "Generic (PLEG): container finished" 
podID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerID="957d74795cfb59a68ab79d45515ef1fc025d741ba2013c0e08d7cd0dc823d0f0" exitCode=0 Feb 19 00:28:47 crc kubenswrapper[4889]: I0219 00:28:47.856788 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a25d0ab5-f3cd-4477-8272-2b08c0d16c47","Type":"ContainerDied","Data":"957d74795cfb59a68ab79d45515ef1fc025d741ba2013c0e08d7cd0dc823d0f0"} Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.119863 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253386 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-blob-cache\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253449 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-run\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253496 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-pull\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253608 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-system-configs\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253634 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-node-pullsecrets\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253669 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8kgd\" (UniqueName: \"kubernetes.io/projected/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-kube-api-access-p8kgd\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253716 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-proxy-ca-bundles\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253771 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildworkdir\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253794 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-ca-bundles\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 
00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253820 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-root\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253846 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildcachedir\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.253874 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-push\") pod \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\" (UID: \"a25d0ab5-f3cd-4477-8272-2b08c0d16c47\") " Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.254006 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.254190 4889 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.254601 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.254672 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.254644 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.255541 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.255551 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.258489 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.261099 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-push" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-push") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "builder-dockercfg-9cxxm-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.261616 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-kube-api-access-p8kgd" (OuterVolumeSpecName: "kube-api-access-p8kgd") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "kube-api-access-p8kgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.265379 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-pull" (OuterVolumeSpecName: "builder-dockercfg-9cxxm-pull") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "builder-dockercfg-9cxxm-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.355653 4889 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.355718 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8kgd\" (UniqueName: \"kubernetes.io/projected/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-kube-api-access-p8kgd\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.355739 4889 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.355753 4889 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.355767 4889 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.355781 4889 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.355795 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-push\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-push\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.355810 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.355822 4889 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-9cxxm-pull\" (UniqueName: \"kubernetes.io/secret/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-builder-dockercfg-9cxxm-pull\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.371833 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.456855 4889 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.887941 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"a25d0ab5-f3cd-4477-8272-2b08c0d16c47","Type":"ContainerDied","Data":"fd52861509d76527337de2f0db0af9be86ba27f2717462cf78b3520467622ff8"} Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.888487 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd52861509d76527337de2f0db0af9be86ba27f2717462cf78b3520467622ff8" Feb 19 00:28:49 crc kubenswrapper[4889]: I0219 00:28:49.887990 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Feb 19 00:28:50 crc kubenswrapper[4889]: I0219 00:28:50.216258 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a25d0ab5-f3cd-4477-8272-2b08c0d16c47" (UID: "a25d0ab5-f3cd-4477-8272-2b08c0d16c47"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:28:50 crc kubenswrapper[4889]: I0219 00:28:50.270672 4889 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a25d0ab5-f3cd-4477-8272-2b08c0d16c47-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.921399 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws"] Feb 19 00:28:54 crc kubenswrapper[4889]: E0219 00:28:54.922738 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerName="docker-build" Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.922765 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerName="docker-build" Feb 19 00:28:54 crc kubenswrapper[4889]: E0219 00:28:54.922779 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerName="git-clone" Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.922786 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerName="git-clone" Feb 19 00:28:54 crc kubenswrapper[4889]: E0219 00:28:54.922799 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerName="manage-dockerfile" Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.922806 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerName="manage-dockerfile" Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.922985 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25d0ab5-f3cd-4477-8272-2b08c0d16c47" containerName="docker-build" Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.923693 4889 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.926461 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-mnjjx" Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.931265 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws"] Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.939886 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ppk4\" (UniqueName: \"kubernetes.io/projected/d9ab18a7-ef5a-441e-825c-7b8cbba10d03-kube-api-access-6ppk4\") pod \"smart-gateway-operator-7954c5f85d-lz7ws\" (UID: \"d9ab18a7-ef5a-441e-825c-7b8cbba10d03\") " pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" Feb 19 00:28:54 crc kubenswrapper[4889]: I0219 00:28:54.940006 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/d9ab18a7-ef5a-441e-825c-7b8cbba10d03-runner\") pod \"smart-gateway-operator-7954c5f85d-lz7ws\" (UID: \"d9ab18a7-ef5a-441e-825c-7b8cbba10d03\") " pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" Feb 19 00:28:55 crc kubenswrapper[4889]: I0219 00:28:55.041774 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ppk4\" (UniqueName: \"kubernetes.io/projected/d9ab18a7-ef5a-441e-825c-7b8cbba10d03-kube-api-access-6ppk4\") pod \"smart-gateway-operator-7954c5f85d-lz7ws\" (UID: \"d9ab18a7-ef5a-441e-825c-7b8cbba10d03\") " pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" Feb 19 00:28:55 crc kubenswrapper[4889]: I0219 00:28:55.042208 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/d9ab18a7-ef5a-441e-825c-7b8cbba10d03-runner\") pod \"smart-gateway-operator-7954c5f85d-lz7ws\" (UID: \"d9ab18a7-ef5a-441e-825c-7b8cbba10d03\") " pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" Feb 19 00:28:55 crc kubenswrapper[4889]: I0219 00:28:55.042971 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/d9ab18a7-ef5a-441e-825c-7b8cbba10d03-runner\") pod \"smart-gateway-operator-7954c5f85d-lz7ws\" (UID: \"d9ab18a7-ef5a-441e-825c-7b8cbba10d03\") " pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" Feb 19 00:28:55 crc kubenswrapper[4889]: I0219 00:28:55.064425 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ppk4\" (UniqueName: \"kubernetes.io/projected/d9ab18a7-ef5a-441e-825c-7b8cbba10d03-kube-api-access-6ppk4\") pod \"smart-gateway-operator-7954c5f85d-lz7ws\" (UID: \"d9ab18a7-ef5a-441e-825c-7b8cbba10d03\") " pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" Feb 19 00:28:55 crc kubenswrapper[4889]: I0219 00:28:55.247388 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" Feb 19 00:28:55 crc kubenswrapper[4889]: I0219 00:28:55.476580 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws"] Feb 19 00:28:55 crc kubenswrapper[4889]: W0219 00:28:55.484331 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9ab18a7_ef5a_441e_825c_7b8cbba10d03.slice/crio-a2f8f36b2a8a8231c0b543d52075fac1cdcf87f2285006347a91b7cafe3f94e8 WatchSource:0}: Error finding container a2f8f36b2a8a8231c0b543d52075fac1cdcf87f2285006347a91b7cafe3f94e8: Status 404 returned error can't find the container with id a2f8f36b2a8a8231c0b543d52075fac1cdcf87f2285006347a91b7cafe3f94e8 Feb 19 00:28:55 crc kubenswrapper[4889]: I0219 00:28:55.489457 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:28:55 crc kubenswrapper[4889]: I0219 00:28:55.927440 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" event={"ID":"d9ab18a7-ef5a-441e-825c-7b8cbba10d03","Type":"ContainerStarted","Data":"a2f8f36b2a8a8231c0b543d52075fac1cdcf87f2285006347a91b7cafe3f94e8"} Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.597280 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-6f9547d677-x9srz"] Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.599439 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.606008 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-b9569" Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.614120 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6f9547d677-x9srz"] Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.649690 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7d1b7264-6b0f-4fac-997e-ec0b9f69fd21-runner\") pod \"service-telemetry-operator-6f9547d677-x9srz\" (UID: \"7d1b7264-6b0f-4fac-997e-ec0b9f69fd21\") " pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.650114 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4jt\" (UniqueName: \"kubernetes.io/projected/7d1b7264-6b0f-4fac-997e-ec0b9f69fd21-kube-api-access-zs4jt\") pod \"service-telemetry-operator-6f9547d677-x9srz\" (UID: \"7d1b7264-6b0f-4fac-997e-ec0b9f69fd21\") " pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.756811 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs4jt\" (UniqueName: \"kubernetes.io/projected/7d1b7264-6b0f-4fac-997e-ec0b9f69fd21-kube-api-access-zs4jt\") pod \"service-telemetry-operator-6f9547d677-x9srz\" (UID: \"7d1b7264-6b0f-4fac-997e-ec0b9f69fd21\") " pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.757022 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/7d1b7264-6b0f-4fac-997e-ec0b9f69fd21-runner\") pod \"service-telemetry-operator-6f9547d677-x9srz\" (UID: \"7d1b7264-6b0f-4fac-997e-ec0b9f69fd21\") " pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.757626 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7d1b7264-6b0f-4fac-997e-ec0b9f69fd21-runner\") pod \"service-telemetry-operator-6f9547d677-x9srz\" (UID: \"7d1b7264-6b0f-4fac-997e-ec0b9f69fd21\") " pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.782658 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs4jt\" (UniqueName: \"kubernetes.io/projected/7d1b7264-6b0f-4fac-997e-ec0b9f69fd21-kube-api-access-zs4jt\") pod \"service-telemetry-operator-6f9547d677-x9srz\" (UID: \"7d1b7264-6b0f-4fac-997e-ec0b9f69fd21\") " pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" Feb 19 00:29:00 crc kubenswrapper[4889]: I0219 00:29:00.923274 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" Feb 19 00:29:06 crc kubenswrapper[4889]: I0219 00:29:06.450013 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6f9547d677-x9srz"] Feb 19 00:29:09 crc kubenswrapper[4889]: W0219 00:29:09.389746 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d1b7264_6b0f_4fac_997e_ec0b9f69fd21.slice/crio-e1553d83837e7e0fa3af430cb36004c8aca53286919aec9e10db4764daae7e9a WatchSource:0}: Error finding container e1553d83837e7e0fa3af430cb36004c8aca53286919aec9e10db4764daae7e9a: Status 404 returned error can't find the container with id e1553d83837e7e0fa3af430cb36004c8aca53286919aec9e10db4764daae7e9a Feb 19 00:29:10 crc kubenswrapper[4889]: I0219 00:29:10.066392 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" event={"ID":"d9ab18a7-ef5a-441e-825c-7b8cbba10d03","Type":"ContainerStarted","Data":"1c2a3373e7b4169ca1fdafdf5191dba63fa958191fe689d00e09796f2cc455c0"} Feb 19 00:29:10 crc kubenswrapper[4889]: I0219 00:29:10.067565 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" event={"ID":"7d1b7264-6b0f-4fac-997e-ec0b9f69fd21","Type":"ContainerStarted","Data":"e1553d83837e7e0fa3af430cb36004c8aca53286919aec9e10db4764daae7e9a"} Feb 19 00:29:10 crc kubenswrapper[4889]: I0219 00:29:10.088009 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-7954c5f85d-lz7ws" podStartSLOduration=1.7855050399999999 podStartE2EDuration="16.087994339s" podCreationTimestamp="2026-02-19 00:28:54 +0000 UTC" firstStartedPulling="2026-02-19 00:28:55.489182692 +0000 UTC m=+1361.453847683" lastFinishedPulling="2026-02-19 00:29:09.791671991 +0000 UTC m=+1375.756336982" 
observedRunningTime="2026-02-19 00:29:10.083813458 +0000 UTC m=+1376.048478449" watchObservedRunningTime="2026-02-19 00:29:10.087994339 +0000 UTC m=+1376.052659330" Feb 19 00:29:19 crc kubenswrapper[4889]: I0219 00:29:19.134115 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" event={"ID":"7d1b7264-6b0f-4fac-997e-ec0b9f69fd21","Type":"ContainerStarted","Data":"87982831f805640f4600bae217187b9c06e48148455ffd26b4130211923da8d2"} Feb 19 00:29:19 crc kubenswrapper[4889]: I0219 00:29:19.160253 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-6f9547d677-x9srz" podStartSLOduration=10.373129685 podStartE2EDuration="19.16020051s" podCreationTimestamp="2026-02-19 00:29:00 +0000 UTC" firstStartedPulling="2026-02-19 00:29:09.393261742 +0000 UTC m=+1375.357926733" lastFinishedPulling="2026-02-19 00:29:18.180332567 +0000 UTC m=+1384.144997558" observedRunningTime="2026-02-19 00:29:19.153483768 +0000 UTC m=+1385.118148769" watchObservedRunningTime="2026-02-19 00:29:19.16020051 +0000 UTC m=+1385.124865501" Feb 19 00:29:37 crc kubenswrapper[4889]: I0219 00:29:37.782101 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:29:37 crc kubenswrapper[4889]: I0219 00:29:37.782961 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.895417 4889 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z9hdq"] Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.898056 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.907704 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.907720 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.908285 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.908399 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-5vkzp" Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.908484 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.909292 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.910136 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z9hdq"] Feb 19 00:29:39 crc kubenswrapper[4889]: I0219 00:29:39.911924 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.083324 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b98np\" (UniqueName: 
\"kubernetes.io/projected/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-kube-api-access-b98np\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.083495 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.083554 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.083603 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.083667 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-config\") pod 
\"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.083733 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.083770 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-users\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.184852 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.184908 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc 
kubenswrapper[4889]: I0219 00:29:40.184946 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.184967 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-config\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.184993 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.185030 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-users\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.185088 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b98np\" (UniqueName: \"kubernetes.io/projected/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-kube-api-access-b98np\") pod 
\"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.186086 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-config\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.192767 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.192803 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-users\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.192829 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.193479 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.198923 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.206978 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b98np\" (UniqueName: \"kubernetes.io/projected/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-kube-api-access-b98np\") pod \"default-interconnect-68864d46cb-z9hdq\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.229537 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:29:40 crc kubenswrapper[4889]: I0219 00:29:40.445551 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z9hdq"] Feb 19 00:29:40 crc kubenswrapper[4889]: W0219 00:29:40.453642 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3062ad9c_e9fc_45f7_a3ba_32d5af270bcf.slice/crio-70411f525495f9d7c7ad1fe354c79aa1447756793f7e192989a333af8591dd0d WatchSource:0}: Error finding container 70411f525495f9d7c7ad1fe354c79aa1447756793f7e192989a333af8591dd0d: Status 404 returned error can't find the container with id 70411f525495f9d7c7ad1fe354c79aa1447756793f7e192989a333af8591dd0d Feb 19 00:29:41 crc kubenswrapper[4889]: I0219 00:29:41.298975 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" event={"ID":"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf","Type":"ContainerStarted","Data":"70411f525495f9d7c7ad1fe354c79aa1447756793f7e192989a333af8591dd0d"} Feb 19 00:29:46 crc kubenswrapper[4889]: I0219 00:29:46.343426 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" event={"ID":"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf","Type":"ContainerStarted","Data":"7e2f973b4432e694bd0a22ff78dfff3ddeb7d1044c8a3a380d92d497394ce71e"} Feb 19 00:29:46 crc kubenswrapper[4889]: I0219 00:29:46.371592 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" podStartSLOduration=2.107718451 podStartE2EDuration="7.371540424s" podCreationTimestamp="2026-02-19 00:29:39 +0000 UTC" firstStartedPulling="2026-02-19 00:29:40.456867828 +0000 UTC m=+1406.421532819" lastFinishedPulling="2026-02-19 00:29:45.720689801 +0000 UTC m=+1411.685354792" observedRunningTime="2026-02-19 
00:29:46.362832369 +0000 UTC m=+1412.327497370" watchObservedRunningTime="2026-02-19 00:29:46.371540424 +0000 UTC m=+1412.336205415" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.272768 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.275536 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.278163 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.278497 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.278585 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.278658 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.279322 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.279418 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.279433 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-cjbb4" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.279445 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.279509 4889 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.280874 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.293476 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.375611 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.375702 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.375801 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.375982 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.376083 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-af090266-9385-4c39-9cc2-0b647e17da7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af090266-9385-4c39-9cc2-0b647e17da7d\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.376134 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.376166 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-web-config\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.376243 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-config-out\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.376400 4889 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.376576 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-config\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.376646 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-tls-assets\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.376699 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gndsp\" (UniqueName: \"kubernetes.io/projected/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-kube-api-access-gndsp\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.478675 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-af090266-9385-4c39-9cc2-0b647e17da7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af090266-9385-4c39-9cc2-0b647e17da7d\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: 
I0219 00:29:51.478750 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.478779 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-web-config\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.478814 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-config-out\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.478845 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.478890 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-config\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.478917 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-tls-assets\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.478944 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gndsp\" (UniqueName: \"kubernetes.io/projected/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-kube-api-access-gndsp\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.478997 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.479049 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: E0219 00:29:51.479083 4889 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 19 00:29:51 crc kubenswrapper[4889]: E0219 00:29:51.479200 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-prometheus-proxy-tls podName:fb22c130-bdcd-4e77-8aea-a731d9d7fad7 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:29:51.979174281 +0000 UTC m=+1417.943839272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "fb22c130-bdcd-4e77-8aea-a731d9d7fad7") : secret "default-prometheus-proxy-tls" not found Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.479089 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.479656 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.480761 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.480792 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: 
\"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.480993 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.481076 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.487866 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-config-out\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.488156 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-web-config\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.489056 4889 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.489168 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-af090266-9385-4c39-9cc2-0b647e17da7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af090266-9385-4c39-9cc2-0b647e17da7d\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b80cc96661253800f18a31ce4d60a7cdd099744114bc9349256873b9f2c9de39/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.490489 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-config\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.491776 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-tls-assets\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.514098 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.519209 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gndsp\" (UniqueName: \"kubernetes.io/projected/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-kube-api-access-gndsp\") pod 
\"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.519920 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-af090266-9385-4c39-9cc2-0b647e17da7d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af090266-9385-4c39-9cc2-0b647e17da7d\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: I0219 00:29:51.989352 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:51 crc kubenswrapper[4889]: E0219 00:29:51.989698 4889 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 19 00:29:51 crc kubenswrapper[4889]: E0219 00:29:51.989778 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-prometheus-proxy-tls podName:fb22c130-bdcd-4e77-8aea-a731d9d7fad7 nodeName:}" failed. No retries permitted until 2026-02-19 00:29:52.989756878 +0000 UTC m=+1418.954421879 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "fb22c130-bdcd-4e77-8aea-a731d9d7fad7") : secret "default-prometheus-proxy-tls" not found Feb 19 00:29:53 crc kubenswrapper[4889]: I0219 00:29:53.007487 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:53 crc kubenswrapper[4889]: I0219 00:29:53.019158 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/fb22c130-bdcd-4e77-8aea-a731d9d7fad7-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"fb22c130-bdcd-4e77-8aea-a731d9d7fad7\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:29:53 crc kubenswrapper[4889]: I0219 00:29:53.096214 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 19 00:29:53 crc kubenswrapper[4889]: I0219 00:29:53.422592 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:29:54 crc kubenswrapper[4889]: I0219 00:29:54.430465 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb22c130-bdcd-4e77-8aea-a731d9d7fad7","Type":"ContainerStarted","Data":"a5f62bb492724250d7c57586bd1fddeb7f0be68315bd4072b8cca294038a7053"} Feb 19 00:29:57 crc kubenswrapper[4889]: I0219 00:29:57.462022 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb22c130-bdcd-4e77-8aea-a731d9d7fad7","Type":"ContainerStarted","Data":"6dd4cec572939629ce76b4f8d0e02f277a0f321841ee4fc01be849b0370420a3"} Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.153682 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz"] Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.155570 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.158872 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.160265 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.165460 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz"] Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.243423 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df4b2fd-269c-46a1-b077-af3e8f468e4e-config-volume\") pod \"collect-profiles-29524350-rmsrz\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.243595 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbglb\" (UniqueName: \"kubernetes.io/projected/0df4b2fd-269c-46a1-b077-af3e8f468e4e-kube-api-access-tbglb\") pod \"collect-profiles-29524350-rmsrz\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.243676 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0df4b2fd-269c-46a1-b077-af3e8f468e4e-secret-volume\") pod \"collect-profiles-29524350-rmsrz\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.345385 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbglb\" (UniqueName: \"kubernetes.io/projected/0df4b2fd-269c-46a1-b077-af3e8f468e4e-kube-api-access-tbglb\") pod \"collect-profiles-29524350-rmsrz\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.345500 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0df4b2fd-269c-46a1-b077-af3e8f468e4e-secret-volume\") pod \"collect-profiles-29524350-rmsrz\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.345575 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df4b2fd-269c-46a1-b077-af3e8f468e4e-config-volume\") pod \"collect-profiles-29524350-rmsrz\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.346605 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df4b2fd-269c-46a1-b077-af3e8f468e4e-config-volume\") pod \"collect-profiles-29524350-rmsrz\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.353781 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/0df4b2fd-269c-46a1-b077-af3e8f468e4e-secret-volume\") pod \"collect-profiles-29524350-rmsrz\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.365777 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbglb\" (UniqueName: \"kubernetes.io/projected/0df4b2fd-269c-46a1-b077-af3e8f468e4e-kube-api-access-tbglb\") pod \"collect-profiles-29524350-rmsrz\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.521667 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:00 crc kubenswrapper[4889]: I0219 00:30:00.745745 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz"] Feb 19 00:30:00 crc kubenswrapper[4889]: W0219 00:30:00.751397 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0df4b2fd_269c_46a1_b077_af3e8f468e4e.slice/crio-593a23e89d454a4c26980979c9a91a522421796c58f857579a55311ee29af17f WatchSource:0}: Error finding container 593a23e89d454a4c26980979c9a91a522421796c58f857579a55311ee29af17f: Status 404 returned error can't find the container with id 593a23e89d454a4c26980979c9a91a522421796c58f857579a55311ee29af17f Feb 19 00:30:01 crc kubenswrapper[4889]: I0219 00:30:01.493621 4889 generic.go:334] "Generic (PLEG): container finished" podID="0df4b2fd-269c-46a1-b077-af3e8f468e4e" containerID="c4736adebd73c9ae71d94a9e11c79c15496c80df5b739f4800fbd2c1473c97c8" exitCode=0 Feb 19 00:30:01 crc kubenswrapper[4889]: I0219 00:30:01.493709 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" event={"ID":"0df4b2fd-269c-46a1-b077-af3e8f468e4e","Type":"ContainerDied","Data":"c4736adebd73c9ae71d94a9e11c79c15496c80df5b739f4800fbd2c1473c97c8"} Feb 19 00:30:01 crc kubenswrapper[4889]: I0219 00:30:01.494139 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" event={"ID":"0df4b2fd-269c-46a1-b077-af3e8f468e4e","Type":"ContainerStarted","Data":"593a23e89d454a4c26980979c9a91a522421796c58f857579a55311ee29af17f"} Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.584419 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-kk4m4"] Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.585544 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.596214 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-kk4m4"] Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.690639 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw556\" (UniqueName: \"kubernetes.io/projected/62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1-kube-api-access-nw556\") pod \"default-snmp-webhook-6856cfb745-kk4m4\" (UID: \"62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.775134 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.796021 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw556\" (UniqueName: \"kubernetes.io/projected/62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1-kube-api-access-nw556\") pod \"default-snmp-webhook-6856cfb745-kk4m4\" (UID: \"62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.822384 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw556\" (UniqueName: \"kubernetes.io/projected/62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1-kube-api-access-nw556\") pod \"default-snmp-webhook-6856cfb745-kk4m4\" (UID: \"62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.897420 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df4b2fd-269c-46a1-b077-af3e8f468e4e-config-volume\") pod \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.897527 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0df4b2fd-269c-46a1-b077-af3e8f468e4e-secret-volume\") pod \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\" (UID: \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.897783 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbglb\" (UniqueName: \"kubernetes.io/projected/0df4b2fd-269c-46a1-b077-af3e8f468e4e-kube-api-access-tbglb\") pod \"0df4b2fd-269c-46a1-b077-af3e8f468e4e\" (UID: 
\"0df4b2fd-269c-46a1-b077-af3e8f468e4e\") " Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.898814 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df4b2fd-269c-46a1-b077-af3e8f468e4e-config-volume" (OuterVolumeSpecName: "config-volume") pod "0df4b2fd-269c-46a1-b077-af3e8f468e4e" (UID: "0df4b2fd-269c-46a1-b077-af3e8f468e4e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.903443 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df4b2fd-269c-46a1-b077-af3e8f468e4e-kube-api-access-tbglb" (OuterVolumeSpecName: "kube-api-access-tbglb") pod "0df4b2fd-269c-46a1-b077-af3e8f468e4e" (UID: "0df4b2fd-269c-46a1-b077-af3e8f468e4e"). InnerVolumeSpecName "kube-api-access-tbglb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.903464 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df4b2fd-269c-46a1-b077-af3e8f468e4e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0df4b2fd-269c-46a1-b077-af3e8f468e4e" (UID: "0df4b2fd-269c-46a1-b077-af3e8f468e4e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:30:02 crc kubenswrapper[4889]: I0219 00:30:02.915787 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" Feb 19 00:30:03 crc kubenswrapper[4889]: I0219 00:30:02.999517 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbglb\" (UniqueName: \"kubernetes.io/projected/0df4b2fd-269c-46a1-b077-af3e8f468e4e-kube-api-access-tbglb\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:03 crc kubenswrapper[4889]: I0219 00:30:02.999559 4889 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0df4b2fd-269c-46a1-b077-af3e8f468e4e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:03 crc kubenswrapper[4889]: I0219 00:30:02.999575 4889 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0df4b2fd-269c-46a1-b077-af3e8f468e4e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:03 crc kubenswrapper[4889]: I0219 00:30:03.340417 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-kk4m4"] Feb 19 00:30:03 crc kubenswrapper[4889]: W0219 00:30:03.347053 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f71d34_c015_4b2c_ab1c_6ae0b8a5e0a1.slice/crio-919ef92f38a09fa87b830627fb00288183fc162764b808f37913d95976f5550b WatchSource:0}: Error finding container 919ef92f38a09fa87b830627fb00288183fc162764b808f37913d95976f5550b: Status 404 returned error can't find the container with id 919ef92f38a09fa87b830627fb00288183fc162764b808f37913d95976f5550b Feb 19 00:30:03 crc kubenswrapper[4889]: I0219 00:30:03.517567 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" Feb 19 00:30:03 crc kubenswrapper[4889]: I0219 00:30:03.517554 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-rmsrz" event={"ID":"0df4b2fd-269c-46a1-b077-af3e8f468e4e","Type":"ContainerDied","Data":"593a23e89d454a4c26980979c9a91a522421796c58f857579a55311ee29af17f"} Feb 19 00:30:03 crc kubenswrapper[4889]: I0219 00:30:03.518183 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="593a23e89d454a4c26980979c9a91a522421796c58f857579a55311ee29af17f" Feb 19 00:30:03 crc kubenswrapper[4889]: I0219 00:30:03.519850 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" event={"ID":"62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1","Type":"ContainerStarted","Data":"919ef92f38a09fa87b830627fb00288183fc162764b808f37913d95976f5550b"} Feb 19 00:30:05 crc kubenswrapper[4889]: I0219 00:30:05.536099 4889 generic.go:334] "Generic (PLEG): container finished" podID="fb22c130-bdcd-4e77-8aea-a731d9d7fad7" containerID="6dd4cec572939629ce76b4f8d0e02f277a0f321841ee4fc01be849b0370420a3" exitCode=0 Feb 19 00:30:05 crc kubenswrapper[4889]: I0219 00:30:05.536200 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb22c130-bdcd-4e77-8aea-a731d9d7fad7","Type":"ContainerDied","Data":"6dd4cec572939629ce76b4f8d0e02f277a0f321841ee4fc01be849b0370420a3"} Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.931193 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:30:06 crc kubenswrapper[4889]: E0219 00:30:06.931660 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df4b2fd-269c-46a1-b077-af3e8f468e4e" containerName="collect-profiles" Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.931676 4889 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0df4b2fd-269c-46a1-b077-af3e8f468e4e" containerName="collect-profiles" Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.931868 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df4b2fd-269c-46a1-b077-af3e8f468e4e" containerName="collect-profiles" Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.933549 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.939040 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.939340 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-d6795" Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.939490 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.941997 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.942202 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Feb 19 00:30:06 crc kubenswrapper[4889]: I0219 00:30:06.942056 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.085569 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g59ph\" (UniqueName: \"kubernetes.io/projected/0068ff27-e035-4aac-ada3-5f46690fb384-kube-api-access-g59ph\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " 
pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.085667 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0068ff27-e035-4aac-ada3-5f46690fb384-tls-assets\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.085790 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-config-volume\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.085890 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-555393ef-8085-4eb0-b29b-9aee805714f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-555393ef-8085-4eb0-b29b-9aee805714f2\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.085972 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-web-config\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.086058 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-session-secret\") pod 
\"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.086116 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.086156 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.086344 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0068ff27-e035-4aac-ada3-5f46690fb384-config-out\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.188500 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0068ff27-e035-4aac-ada3-5f46690fb384-tls-assets\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.188580 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-config-volume\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.188609 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-555393ef-8085-4eb0-b29b-9aee805714f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-555393ef-8085-4eb0-b29b-9aee805714f2\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.188650 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-web-config\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.188690 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.188719 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.188738 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.188789 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0068ff27-e035-4aac-ada3-5f46690fb384-config-out\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.188813 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g59ph\" (UniqueName: \"kubernetes.io/projected/0068ff27-e035-4aac-ada3-5f46690fb384-kube-api-access-g59ph\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: E0219 00:30:07.195658 4889 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:30:07 crc kubenswrapper[4889]: E0219 00:30:07.198215 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls podName:0068ff27-e035-4aac-ada3-5f46690fb384 nodeName:}" failed. No retries permitted until 2026-02-19 00:30:07.698184626 +0000 UTC m=+1433.662849617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "0068ff27-e035-4aac-ada3-5f46690fb384") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.199662 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-web-config\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.199690 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-config-volume\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.212031 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.212068 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.212068 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/0068ff27-e035-4aac-ada3-5f46690fb384-tls-assets\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.214851 4889 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.214894 4889 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-555393ef-8085-4eb0-b29b-9aee805714f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-555393ef-8085-4eb0-b29b-9aee805714f2\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/12bf14aaedc857b8cdcd1cc41d69c983e19b6004f7ab54710c16511e82ee04e1/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.214923 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0068ff27-e035-4aac-ada3-5f46690fb384-config-out\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.217334 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g59ph\" (UniqueName: \"kubernetes.io/projected/0068ff27-e035-4aac-ada3-5f46690fb384-kube-api-access-g59ph\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.250532 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-555393ef-8085-4eb0-b29b-9aee805714f2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-555393ef-8085-4eb0-b29b-9aee805714f2\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.359536 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.698604 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:07 crc kubenswrapper[4889]: E0219 00:30:07.699095 4889 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:30:07 crc kubenswrapper[4889]: E0219 00:30:07.699305 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls podName:0068ff27-e035-4aac-ada3-5f46690fb384 nodeName:}" failed. No retries permitted until 2026-02-19 00:30:08.699178942 +0000 UTC m=+1434.663843933 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "0068ff27-e035-4aac-ada3-5f46690fb384") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.781626 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:30:07 crc kubenswrapper[4889]: I0219 00:30:07.781719 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:30:08 crc kubenswrapper[4889]: I0219 00:30:08.716255 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:08 crc kubenswrapper[4889]: E0219 00:30:08.716478 4889 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:30:08 crc kubenswrapper[4889]: E0219 00:30:08.716565 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls podName:0068ff27-e035-4aac-ada3-5f46690fb384 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:30:10.716547927 +0000 UTC m=+1436.681212918 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "0068ff27-e035-4aac-ada3-5f46690fb384") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:30:10 crc kubenswrapper[4889]: I0219 00:30:10.801827 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:10 crc kubenswrapper[4889]: E0219 00:30:10.802115 4889 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:30:10 crc kubenswrapper[4889]: E0219 00:30:10.802500 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls podName:0068ff27-e035-4aac-ada3-5f46690fb384 nodeName:}" failed. No retries permitted until 2026-02-19 00:30:14.802456108 +0000 UTC m=+1440.767121109 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "0068ff27-e035-4aac-ada3-5f46690fb384") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:30:14 crc kubenswrapper[4889]: I0219 00:30:14.893291 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:14 crc kubenswrapper[4889]: I0219 00:30:14.902279 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0068ff27-e035-4aac-ada3-5f46690fb384-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"0068ff27-e035-4aac-ada3-5f46690fb384\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:15 crc kubenswrapper[4889]: I0219 00:30:15.078084 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-d6795" Feb 19 00:30:15 crc kubenswrapper[4889]: I0219 00:30:15.085160 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 19 00:30:17 crc kubenswrapper[4889]: E0219 00:30:17.148336 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest" Feb 19 00:30:17 crc kubenswrapper[4889]: E0219 00:30:17.149960 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-webhook-snmp,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:9099,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:SNMP_COMMUNITY,Value:public,ValueFrom:nil,},EnvVar{Name:SNMP_RETRIES,Value:5,ValueFrom:nil,},EnvVar{Name:SNMP_HOST,Value:192.168.24.254,ValueFrom:nil,},EnvVar{Name:SNMP_PORT,Value:162,ValueFrom:nil,},EnvVar{Name:SNMP_TIMEOUT,Value:1,ValueFrom:nil,},EnvVar{Name:ALERT_OID_LABEL,Value:oid,ValueFrom:nil,},EnvVar{Name:TRAP_OID_PREFIX,Value:1.3.6.1.4.1.50495.15,ValueFrom:nil,},EnvVar{Name:TRAP_DEFAULT_OID,Value:1.3.6.1.4.1.50495.15.1.2.1,ValueFrom:nil,},EnvVar{Name:TRAP_DEFAULT_SEVERITY,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nw556,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,Secco
mpProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod default-snmp-webhook-6856cfb745-kk4m4_service-telemetry(62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 00:30:17 crc kubenswrapper[4889]: E0219 00:30:17.151164 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-webhook-snmp\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" podUID="62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1" Feb 19 00:30:17 crc kubenswrapper[4889]: I0219 00:30:17.554609 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:30:17 crc kubenswrapper[4889]: I0219 00:30:17.693296 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0068ff27-e035-4aac-ada3-5f46690fb384","Type":"ContainerStarted","Data":"af81e28753b0e00a33a9db338a3cb424cca85fc9a3de6ced4ba014bc7014ef0f"} Feb 19 00:30:17 crc kubenswrapper[4889]: E0219 00:30:17.697972 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-webhook-snmp\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/prometheus-webhook-snmp:latest\\\"\"" pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" podUID="62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1" Feb 19 00:30:20 crc kubenswrapper[4889]: I0219 00:30:20.734713 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"0068ff27-e035-4aac-ada3-5f46690fb384","Type":"ContainerStarted","Data":"4076887f15a3319ef67d741f9e31a01e8e1126e4c47e4c2305a9be4c3f0bbed0"} Feb 19 00:30:25 crc kubenswrapper[4889]: I0219 00:30:25.129420 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb22c130-bdcd-4e77-8aea-a731d9d7fad7","Type":"ContainerStarted","Data":"3ca3283a3bb1fcac7d30d575bdf5abfa30d8daa1de75e7facdfad101b61400bf"} Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.604619 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf"] Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.607154 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.611715 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.612934 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.613040 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.613317 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-czt6b" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.628845 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf"] Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.672438 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65375c3-144f-4b4a-939c-dfd5d5396469-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.672611 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.672649 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f65375c3-144f-4b4a-939c-dfd5d5396469-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.672686 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jprct\" (UniqueName: \"kubernetes.io/projected/f65375c3-144f-4b4a-939c-dfd5d5396469-kube-api-access-jprct\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.672723 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: 
\"kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.774588 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.775002 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f65375c3-144f-4b4a-939c-dfd5d5396469-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.775103 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jprct\" (UniqueName: \"kubernetes.io/projected/f65375c3-144f-4b4a-939c-dfd5d5396469-kube-api-access-jprct\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.775199 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" 
(UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.775333 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65375c3-144f-4b4a-939c-dfd5d5396469-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: E0219 00:30:26.774848 4889 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:30:26 crc kubenswrapper[4889]: E0219 00:30:26.775690 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-default-cloud1-coll-meter-proxy-tls podName:f65375c3-144f-4b4a-939c-dfd5d5396469 nodeName:}" failed. No retries permitted until 2026-02-19 00:30:27.275647638 +0000 UTC m=+1453.240312629 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" (UID: "f65375c3-144f-4b4a-939c-dfd5d5396469") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.775885 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65375c3-144f-4b4a-939c-dfd5d5396469-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.776526 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f65375c3-144f-4b4a-939c-dfd5d5396469-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.786901 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:26 crc kubenswrapper[4889]: I0219 00:30:26.799024 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jprct\" (UniqueName: \"kubernetes.io/projected/f65375c3-144f-4b4a-939c-dfd5d5396469-kube-api-access-jprct\") pod 
\"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:27 crc kubenswrapper[4889]: I0219 00:30:27.150407 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb22c130-bdcd-4e77-8aea-a731d9d7fad7","Type":"ContainerStarted","Data":"ecb600aec10913ada414f0285c33669df7d6a9c29c1e117725d97eede14cb64d"} Feb 19 00:30:27 crc kubenswrapper[4889]: I0219 00:30:27.283462 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:27 crc kubenswrapper[4889]: E0219 00:30:27.283749 4889 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:30:27 crc kubenswrapper[4889]: E0219 00:30:27.283865 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-default-cloud1-coll-meter-proxy-tls podName:f65375c3-144f-4b4a-939c-dfd5d5396469 nodeName:}" failed. No retries permitted until 2026-02-19 00:30:28.283835309 +0000 UTC m=+1454.248500300 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" (UID: "f65375c3-144f-4b4a-939c-dfd5d5396469") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:30:28 crc kubenswrapper[4889]: I0219 00:30:28.305599 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:28 crc kubenswrapper[4889]: I0219 00:30:28.330216 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f65375c3-144f-4b4a-939c-dfd5d5396469-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf\" (UID: \"f65375c3-144f-4b4a-939c-dfd5d5396469\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:28 crc kubenswrapper[4889]: I0219 00:30:28.441973 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" Feb 19 00:30:28 crc kubenswrapper[4889]: I0219 00:30:28.776160 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf"] Feb 19 00:30:28 crc kubenswrapper[4889]: W0219 00:30:28.790778 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf65375c3_144f_4b4a_939c_dfd5d5396469.slice/crio-6360f244e9b4415f54b0b66c286d414302f5753dc555a3dcd6dc28489fc6c299 WatchSource:0}: Error finding container 6360f244e9b4415f54b0b66c286d414302f5753dc555a3dcd6dc28489fc6c299: Status 404 returned error can't find the container with id 6360f244e9b4415f54b0b66c286d414302f5753dc555a3dcd6dc28489fc6c299 Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.169441 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" event={"ID":"f65375c3-144f-4b4a-939c-dfd5d5396469","Type":"ContainerStarted","Data":"6360f244e9b4415f54b0b66c286d414302f5753dc555a3dcd6dc28489fc6c299"} Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.478134 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg"] Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.480151 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.482580 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.487109 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.493174 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg"] Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.531787 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a81a057f-0e52-490e-89a9-83d27b389e0c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.532491 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.532573 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a81a057f-0e52-490e-89a9-83d27b389e0c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: 
\"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.532782 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.532916 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwcqt\" (UniqueName: \"kubernetes.io/projected/a81a057f-0e52-490e-89a9-83d27b389e0c-kube-api-access-xwcqt\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.634541 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwcqt\" (UniqueName: \"kubernetes.io/projected/a81a057f-0e52-490e-89a9-83d27b389e0c-kube-api-access-xwcqt\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.634649 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a81a057f-0e52-490e-89a9-83d27b389e0c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.634678 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.634719 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a81a057f-0e52-490e-89a9-83d27b389e0c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.634756 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: E0219 00:30:29.634911 4889 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:30:29 crc kubenswrapper[4889]: E0219 00:30:29.634985 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-default-cloud1-ceil-meter-proxy-tls podName:a81a057f-0e52-490e-89a9-83d27b389e0c nodeName:}" failed. 
No retries permitted until 2026-02-19 00:30:30.134959883 +0000 UTC m=+1456.099624874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" (UID: "a81a057f-0e52-490e-89a9-83d27b389e0c") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.635906 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/a81a057f-0e52-490e-89a9-83d27b389e0c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.636205 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/a81a057f-0e52-490e-89a9-83d27b389e0c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.643740 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:29 crc kubenswrapper[4889]: I0219 00:30:29.664104 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwcqt\" (UniqueName: 
\"kubernetes.io/projected/a81a057f-0e52-490e-89a9-83d27b389e0c-kube-api-access-xwcqt\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:30 crc kubenswrapper[4889]: I0219 00:30:30.144899 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:30 crc kubenswrapper[4889]: E0219 00:30:30.145783 4889 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:30:30 crc kubenswrapper[4889]: E0219 00:30:30.145860 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-default-cloud1-ceil-meter-proxy-tls podName:a81a057f-0e52-490e-89a9-83d27b389e0c nodeName:}" failed. No retries permitted until 2026-02-19 00:30:31.14583981 +0000 UTC m=+1457.110504801 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" (UID: "a81a057f-0e52-490e-89a9-83d27b389e0c") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:30:30 crc kubenswrapper[4889]: I0219 00:30:30.180839 4889 generic.go:334] "Generic (PLEG): container finished" podID="0068ff27-e035-4aac-ada3-5f46690fb384" containerID="4076887f15a3319ef67d741f9e31a01e8e1126e4c47e4c2305a9be4c3f0bbed0" exitCode=0 Feb 19 00:30:30 crc kubenswrapper[4889]: I0219 00:30:30.180893 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0068ff27-e035-4aac-ada3-5f46690fb384","Type":"ContainerDied","Data":"4076887f15a3319ef67d741f9e31a01e8e1126e4c47e4c2305a9be4c3f0bbed0"} Feb 19 00:30:31 crc kubenswrapper[4889]: I0219 00:30:31.177187 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:31 crc kubenswrapper[4889]: I0219 00:30:31.203592 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/a81a057f-0e52-490e-89a9-83d27b389e0c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg\" (UID: \"a81a057f-0e52-490e-89a9-83d27b389e0c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:31 crc kubenswrapper[4889]: I0219 00:30:31.344425 4889 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.643759 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr"] Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.656766 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.658370 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr"] Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.662601 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.664395 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.834876 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.835196 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: 
\"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.835495 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.835561 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772sj\" (UniqueName: \"kubernetes.io/projected/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-kube-api-access-772sj\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.835754 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.937751 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc 
kubenswrapper[4889]: I0219 00:30:33.937824 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.937883 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.937908 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-772sj\" (UniqueName: \"kubernetes.io/projected/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-kube-api-access-772sj\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.937950 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: E0219 00:30:33.938131 4889 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret 
"default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.938255 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: E0219 00:30:33.938267 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-default-cloud1-sens-meter-proxy-tls podName:65c0d350-5f06-41d2-bdb0-ec37cf8d42f6 nodeName:}" failed. No retries permitted until 2026-02-19 00:30:34.438240653 +0000 UTC m=+1460.402905644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" (UID: "65c0d350-5f06-41d2-bdb0-ec37cf8d42f6") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.938801 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.946864 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: 
\"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:33 crc kubenswrapper[4889]: I0219 00:30:33.962714 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-772sj\" (UniqueName: \"kubernetes.io/projected/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-kube-api-access-772sj\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:34 crc kubenswrapper[4889]: I0219 00:30:34.447951 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:34 crc kubenswrapper[4889]: E0219 00:30:34.448506 4889 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:30:34 crc kubenswrapper[4889]: E0219 00:30:34.448650 4889 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-default-cloud1-sens-meter-proxy-tls podName:65c0d350-5f06-41d2-bdb0-ec37cf8d42f6 nodeName:}" failed. No retries permitted until 2026-02-19 00:30:35.448630425 +0000 UTC m=+1461.413295416 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" (UID: "65c0d350-5f06-41d2-bdb0-ec37cf8d42f6") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:30:35 crc kubenswrapper[4889]: I0219 00:30:35.514343 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:35 crc kubenswrapper[4889]: I0219 00:30:35.522125 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/65c0d350-5f06-41d2-bdb0-ec37cf8d42f6-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr\" (UID: \"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:35 crc kubenswrapper[4889]: I0219 00:30:35.785666 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" Feb 19 00:30:37 crc kubenswrapper[4889]: I0219 00:30:37.781772 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:30:37 crc kubenswrapper[4889]: I0219 00:30:37.782719 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:30:37 crc kubenswrapper[4889]: I0219 00:30:37.782782 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:30:37 crc kubenswrapper[4889]: I0219 00:30:37.783610 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40c89b17b39712776e841fba6d4f65f04b05a7818e694b7b66336b8111d61491"} pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:30:37 crc kubenswrapper[4889]: I0219 00:30:37.783692 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" containerID="cri-o://40c89b17b39712776e841fba6d4f65f04b05a7818e694b7b66336b8111d61491" gracePeriod=600 Feb 19 00:30:38 crc kubenswrapper[4889]: I0219 00:30:38.265327 4889 generic.go:334] "Generic (PLEG): container 
finished" podID="900d194e-937f-4a59-abba-21ed9f94f24f" containerID="40c89b17b39712776e841fba6d4f65f04b05a7818e694b7b66336b8111d61491" exitCode=0 Feb 19 00:30:38 crc kubenswrapper[4889]: I0219 00:30:38.265399 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerDied","Data":"40c89b17b39712776e841fba6d4f65f04b05a7818e694b7b66336b8111d61491"} Feb 19 00:30:38 crc kubenswrapper[4889]: I0219 00:30:38.265511 4889 scope.go:117] "RemoveContainer" containerID="a422434d7e3c6cbd938447bafd74a7de72de7d70515b5ce97089afc1e4e805bb" Feb 19 00:30:38 crc kubenswrapper[4889]: I0219 00:30:38.765981 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg"] Feb 19 00:30:38 crc kubenswrapper[4889]: W0219 00:30:38.885324 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda81a057f_0e52_490e_89a9_83d27b389e0c.slice/crio-2095eda0159ac2b3f9c27ba4e7f5fc6bd9741f156ead0035abf1c8a69adebe00 WatchSource:0}: Error finding container 2095eda0159ac2b3f9c27ba4e7f5fc6bd9741f156ead0035abf1c8a69adebe00: Status 404 returned error can't find the container with id 2095eda0159ac2b3f9c27ba4e7f5fc6bd9741f156ead0035abf1c8a69adebe00 Feb 19 00:30:39 crc kubenswrapper[4889]: I0219 00:30:39.038989 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr"] Feb 19 00:30:39 crc kubenswrapper[4889]: I0219 00:30:39.282636 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"} Feb 19 00:30:39 crc kubenswrapper[4889]: I0219 00:30:39.306306 
4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"fb22c130-bdcd-4e77-8aea-a731d9d7fad7","Type":"ContainerStarted","Data":"c1a946b4cae32293c8af229816440cd714eba35ab8ef956f357b7e49c7a37149"} Feb 19 00:30:39 crc kubenswrapper[4889]: I0219 00:30:39.317170 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" event={"ID":"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6","Type":"ContainerStarted","Data":"67286213ab057de28831d9a0253b481bfd55ebc2488ba30eca90c01999862452"} Feb 19 00:30:39 crc kubenswrapper[4889]: I0219 00:30:39.324664 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" event={"ID":"62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1","Type":"ContainerStarted","Data":"8f256b38e12bc5134329b4e19311db8c82aa3bf5a92c03f1af9f8af798d51a2e"} Feb 19 00:30:39 crc kubenswrapper[4889]: I0219 00:30:39.330493 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" event={"ID":"f65375c3-144f-4b4a-939c-dfd5d5396469","Type":"ContainerStarted","Data":"da263b40f9b84cec385bded1d389b6bd4cd9a3080a219c43171875bb2937bed8"} Feb 19 00:30:39 crc kubenswrapper[4889]: I0219 00:30:39.337527 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" event={"ID":"a81a057f-0e52-490e-89a9-83d27b389e0c","Type":"ContainerStarted","Data":"2095eda0159ac2b3f9c27ba4e7f5fc6bd9741f156ead0035abf1c8a69adebe00"} Feb 19 00:30:39 crc kubenswrapper[4889]: I0219 00:30:39.370236 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=3.899114535 podStartE2EDuration="49.37019662s" podCreationTimestamp="2026-02-19 00:29:50 +0000 UTC" firstStartedPulling="2026-02-19 00:29:53.412519234 +0000 UTC 
m=+1419.377184235" lastFinishedPulling="2026-02-19 00:30:38.883601329 +0000 UTC m=+1464.848266320" observedRunningTime="2026-02-19 00:30:39.362114305 +0000 UTC m=+1465.326779306" watchObservedRunningTime="2026-02-19 00:30:39.37019662 +0000 UTC m=+1465.334861611" Feb 19 00:30:39 crc kubenswrapper[4889]: I0219 00:30:39.390011 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-kk4m4" podStartSLOduration=3.323666648 podStartE2EDuration="37.389973694s" podCreationTimestamp="2026-02-19 00:30:02 +0000 UTC" firstStartedPulling="2026-02-19 00:30:03.350302342 +0000 UTC m=+1429.314967343" lastFinishedPulling="2026-02-19 00:30:37.416609398 +0000 UTC m=+1463.381274389" observedRunningTime="2026-02-19 00:30:39.385078099 +0000 UTC m=+1465.349743100" watchObservedRunningTime="2026-02-19 00:30:39.389973694 +0000 UTC m=+1465.354638695" Feb 19 00:30:41 crc kubenswrapper[4889]: I0219 00:30:41.366742 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" event={"ID":"a81a057f-0e52-490e-89a9-83d27b389e0c","Type":"ContainerStarted","Data":"b2cb4feec68e3035fbfd7d55687e8ed2c04f8c0d60c9b19b90b4fe51a9192cbd"} Feb 19 00:30:43 crc kubenswrapper[4889]: I0219 00:30:43.096508 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Feb 19 00:30:43 crc kubenswrapper[4889]: I0219 00:30:43.386360 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" event={"ID":"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6","Type":"ContainerStarted","Data":"9677b0cce782d266db3f01a147fe5062485351516168ca15ac455a11e1c9e344"} Feb 19 00:30:43 crc kubenswrapper[4889]: I0219 00:30:43.386413 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" 
event={"ID":"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6","Type":"ContainerStarted","Data":"0f7c547d05efbaaaae6c4e647697502f949378c079a0baa5e096efe1bb886add"} Feb 19 00:30:43 crc kubenswrapper[4889]: I0219 00:30:43.388572 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0068ff27-e035-4aac-ada3-5f46690fb384","Type":"ContainerStarted","Data":"303cb6ba4e541ede1606e9bcf4a118414a2a448c329275359d372a99342eecdf"} Feb 19 00:30:43 crc kubenswrapper[4889]: I0219 00:30:43.390984 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" event={"ID":"f65375c3-144f-4b4a-939c-dfd5d5396469","Type":"ContainerStarted","Data":"4cb5f53175fc5efe76228a040f06f8dca41ee0a3f11ac115cf9a6e01f16a426b"} Feb 19 00:30:43 crc kubenswrapper[4889]: I0219 00:30:43.393255 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" event={"ID":"a81a057f-0e52-490e-89a9-83d27b389e0c","Type":"ContainerStarted","Data":"f08eceec03e52711d1cf4a0a449c1f051db0a1c07dda0c9b8c7c8d986e891da6"} Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.352063 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m"] Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.353809 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.357549 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.357549 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.365881 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m"] Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.461452 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.461530 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.461565 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.461710 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rqw8\" (UniqueName: \"kubernetes.io/projected/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-kube-api-access-7rqw8\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.565869 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.565973 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.566031 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.566100 4889 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7rqw8\" (UniqueName: \"kubernetes.io/projected/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-kube-api-access-7rqw8\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.567908 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.569161 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.574498 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.595081 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rqw8\" (UniqueName: \"kubernetes.io/projected/6192d5a7-5f82-4bc6-9a05-7044958b5ce2-kube-api-access-7rqw8\") pod 
\"default-cloud1-coll-event-smartgateway-96874f644-5nw5m\" (UID: \"6192d5a7-5f82-4bc6-9a05-7044958b5ce2\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:44 crc kubenswrapper[4889]: I0219 00:30:44.692683 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.201349 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6"] Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.203247 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.205748 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.239047 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6"] Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.281650 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2fhn\" (UniqueName: \"kubernetes.io/projected/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-kube-api-access-h2fhn\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.281789 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-socket-dir\") pod 
\"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.281857 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.281904 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.383848 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2fhn\" (UniqueName: \"kubernetes.io/projected/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-kube-api-access-h2fhn\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.383910 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.383944 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.383983 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.384934 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.388635 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:45 crc kubenswrapper[4889]: I0219 00:30:45.401078 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" 
(UniqueName: \"kubernetes.io/secret/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:46 crc kubenswrapper[4889]: I0219 00:30:46.094192 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2fhn\" (UniqueName: \"kubernetes.io/projected/6352877d-bf5a-4b08-9fb0-f426fe70cdd6-kube-api-access-h2fhn\") pod \"default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6\" (UID: \"6352877d-bf5a-4b08-9fb0-f426fe70cdd6\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:46 crc kubenswrapper[4889]: I0219 00:30:46.095908 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" Feb 19 00:30:49 crc kubenswrapper[4889]: I0219 00:30:49.680769 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6"] Feb 19 00:30:49 crc kubenswrapper[4889]: W0219 00:30:49.690668 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6352877d_bf5a_4b08_9fb0_f426fe70cdd6.slice/crio-e97c366ffe725c0386ff06f55fdf02cf8ae5de90a4f9f0176ff62b3a6a901df5 WatchSource:0}: Error finding container e97c366ffe725c0386ff06f55fdf02cf8ae5de90a4f9f0176ff62b3a6a901df5: Status 404 returned error can't find the container with id e97c366ffe725c0386ff06f55fdf02cf8ae5de90a4f9f0176ff62b3a6a901df5 Feb 19 00:30:49 crc kubenswrapper[4889]: I0219 00:30:49.882892 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m"] Feb 19 00:30:49 crc kubenswrapper[4889]: W0219 00:30:49.892853 4889 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6192d5a7_5f82_4bc6_9a05_7044958b5ce2.slice/crio-529ae311e0c058972590c850ec5a3b580b7656f1326aed923236d5c6aa197f80 WatchSource:0}: Error finding container 529ae311e0c058972590c850ec5a3b580b7656f1326aed923236d5c6aa197f80: Status 404 returned error can't find the container with id 529ae311e0c058972590c850ec5a3b580b7656f1326aed923236d5c6aa197f80 Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.178791 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" event={"ID":"6192d5a7-5f82-4bc6-9a05-7044958b5ce2","Type":"ContainerStarted","Data":"4efa893d3485d095ba4a9203fa78b241cdda9096afb953662636a547bc0b664f"} Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.178848 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" event={"ID":"6192d5a7-5f82-4bc6-9a05-7044958b5ce2","Type":"ContainerStarted","Data":"529ae311e0c058972590c850ec5a3b580b7656f1326aed923236d5c6aa197f80"} Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.182276 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" event={"ID":"f65375c3-144f-4b4a-939c-dfd5d5396469","Type":"ContainerStarted","Data":"fc1bfa28b1b0b979e5122c861caaff609dfce5ddf7cc7cd41bec66a777deb36c"} Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.187054 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" event={"ID":"6352877d-bf5a-4b08-9fb0-f426fe70cdd6","Type":"ContainerStarted","Data":"c3952707fa5405b6be04b1890152277ed92a52b7be6d53ae1cb166e6757d48c7"} Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.187091 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" event={"ID":"6352877d-bf5a-4b08-9fb0-f426fe70cdd6","Type":"ContainerStarted","Data":"e97c366ffe725c0386ff06f55fdf02cf8ae5de90a4f9f0176ff62b3a6a901df5"} Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.198819 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" event={"ID":"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6","Type":"ContainerStarted","Data":"4c37390522d2f9556f8122044f25cefd449bb3667645cf233d9a2e90e6d98a09"} Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.207925 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" event={"ID":"a81a057f-0e52-490e-89a9-83d27b389e0c","Type":"ContainerStarted","Data":"068089628bc2c3f4fbfd135ef6acca94287a248818fcabf9571373613442e2af"} Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.207994 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" podStartSLOduration=3.568577131 podStartE2EDuration="24.207969341s" podCreationTimestamp="2026-02-19 00:30:26 +0000 UTC" firstStartedPulling="2026-02-19 00:30:28.793873539 +0000 UTC m=+1454.758538530" lastFinishedPulling="2026-02-19 00:30:49.433265749 +0000 UTC m=+1475.397930740" observedRunningTime="2026-02-19 00:30:50.206981769 +0000 UTC m=+1476.171646760" watchObservedRunningTime="2026-02-19 00:30:50.207969341 +0000 UTC m=+1476.172634322" Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.242445 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" podStartSLOduration=6.87676822 podStartE2EDuration="17.242419137s" podCreationTimestamp="2026-02-19 00:30:33 +0000 UTC" firstStartedPulling="2026-02-19 00:30:39.0697275 +0000 UTC 
m=+1465.034392491" lastFinishedPulling="2026-02-19 00:30:49.435378417 +0000 UTC m=+1475.400043408" observedRunningTime="2026-02-19 00:30:50.235949393 +0000 UTC m=+1476.200614394" watchObservedRunningTime="2026-02-19 00:30:50.242419137 +0000 UTC m=+1476.207084128" Feb 19 00:30:50 crc kubenswrapper[4889]: I0219 00:30:50.299925 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" podStartSLOduration=10.737232609 podStartE2EDuration="21.299898521s" podCreationTimestamp="2026-02-19 00:30:29 +0000 UTC" firstStartedPulling="2026-02-19 00:30:38.896526846 +0000 UTC m=+1464.861191837" lastFinishedPulling="2026-02-19 00:30:49.459192758 +0000 UTC m=+1475.423857749" observedRunningTime="2026-02-19 00:30:50.282602535 +0000 UTC m=+1476.247267526" watchObservedRunningTime="2026-02-19 00:30:50.299898521 +0000 UTC m=+1476.264563512" Feb 19 00:30:51 crc kubenswrapper[4889]: I0219 00:30:51.219928 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" event={"ID":"6352877d-bf5a-4b08-9fb0-f426fe70cdd6","Type":"ContainerStarted","Data":"0020e40cff983e651277783aa6f43c2b5e90788b172b90e281efbe7179f709cf"} Feb 19 00:30:51 crc kubenswrapper[4889]: I0219 00:30:51.224907 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" event={"ID":"6192d5a7-5f82-4bc6-9a05-7044958b5ce2","Type":"ContainerStarted","Data":"59d30e86cb9e55f4578c014c6d53caccd555b2b98b3a7e6acb9411cc957c0d63"} Feb 19 00:30:51 crc kubenswrapper[4889]: I0219 00:30:51.246711 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" podStartSLOduration=5.699180368 podStartE2EDuration="6.24668116s" podCreationTimestamp="2026-02-19 00:30:45 +0000 UTC" firstStartedPulling="2026-02-19 
00:30:49.695375379 +0000 UTC m=+1475.660040370" lastFinishedPulling="2026-02-19 00:30:50.242876171 +0000 UTC m=+1476.207541162" observedRunningTime="2026-02-19 00:30:51.240908118 +0000 UTC m=+1477.205573119" watchObservedRunningTime="2026-02-19 00:30:51.24668116 +0000 UTC m=+1477.211346171" Feb 19 00:30:51 crc kubenswrapper[4889]: I0219 00:30:51.272235 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" podStartSLOduration=6.800158453 podStartE2EDuration="7.272199775s" podCreationTimestamp="2026-02-19 00:30:44 +0000 UTC" firstStartedPulling="2026-02-19 00:30:49.89795724 +0000 UTC m=+1475.862622231" lastFinishedPulling="2026-02-19 00:30:50.369998562 +0000 UTC m=+1476.334663553" observedRunningTime="2026-02-19 00:30:51.26731461 +0000 UTC m=+1477.231979621" watchObservedRunningTime="2026-02-19 00:30:51.272199775 +0000 UTC m=+1477.236864766" Feb 19 00:30:53 crc kubenswrapper[4889]: I0219 00:30:53.096510 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 19 00:30:53 crc kubenswrapper[4889]: I0219 00:30:53.135499 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Feb 19 00:30:53 crc kubenswrapper[4889]: I0219 00:30:53.281797 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Feb 19 00:31:00 crc kubenswrapper[4889]: I0219 00:31:00.992302 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z9hdq"] Feb 19 00:31:00 crc kubenswrapper[4889]: I0219 00:31:00.993558 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" podUID="3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" containerName="default-interconnect" 
containerID="cri-o://7e2f973b4432e694bd0a22ff78dfff3ddeb7d1044c8a3a380d92d497394ce71e" gracePeriod=30 Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.371680 4889 generic.go:334] "Generic (PLEG): container finished" podID="f65375c3-144f-4b4a-939c-dfd5d5396469" containerID="4cb5f53175fc5efe76228a040f06f8dca41ee0a3f11ac115cf9a6e01f16a426b" exitCode=0 Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.371761 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" event={"ID":"f65375c3-144f-4b4a-939c-dfd5d5396469","Type":"ContainerDied","Data":"4cb5f53175fc5efe76228a040f06f8dca41ee0a3f11ac115cf9a6e01f16a426b"} Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.373171 4889 scope.go:117] "RemoveContainer" containerID="4cb5f53175fc5efe76228a040f06f8dca41ee0a3f11ac115cf9a6e01f16a426b" Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.379676 4889 generic.go:334] "Generic (PLEG): container finished" podID="3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" containerID="7e2f973b4432e694bd0a22ff78dfff3ddeb7d1044c8a3a380d92d497394ce71e" exitCode=0 Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.379773 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" event={"ID":"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf","Type":"ContainerDied","Data":"7e2f973b4432e694bd0a22ff78dfff3ddeb7d1044c8a3a380d92d497394ce71e"} Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.426521 4889 generic.go:334] "Generic (PLEG): container finished" podID="a81a057f-0e52-490e-89a9-83d27b389e0c" containerID="f08eceec03e52711d1cf4a0a449c1f051db0a1c07dda0c9b8c7c8d986e891da6" exitCode=0 Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.426588 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" 
event={"ID":"a81a057f-0e52-490e-89a9-83d27b389e0c","Type":"ContainerDied","Data":"f08eceec03e52711d1cf4a0a449c1f051db0a1c07dda0c9b8c7c8d986e891da6"} Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.427467 4889 scope.go:117] "RemoveContainer" containerID="f08eceec03e52711d1cf4a0a449c1f051db0a1c07dda0c9b8c7c8d986e891da6" Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.562005 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.646517 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-ca\") pod \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.646649 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-credentials\") pod \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.646710 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-config\") pod \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.646745 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-users\") pod \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\" (UID: 
\"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.646792 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-ca\") pod \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.646822 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-credentials\") pod \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.646883 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b98np\" (UniqueName: \"kubernetes.io/projected/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-kube-api-access-b98np\") pod \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\" (UID: \"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf\") " Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.649340 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" (UID: "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.666569 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" (UID: "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf"). 
InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.679387 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" (UID: "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.679795 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" (UID: "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.680362 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" (UID: "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.680849 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" (UID: "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 00:31:01 crc kubenswrapper[4889]: I0219 00:31:01.693346 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-kube-api-access-b98np" (OuterVolumeSpecName: "kube-api-access-b98np") pod "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" (UID: "3062ad9c-e9fc-45f7-a3ba-32d5af270bcf"). InnerVolumeSpecName "kube-api-access-b98np". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:01.753882 4889 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:01.753955 4889 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:01.753982 4889 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:01.754002 4889 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-b98np\" (UniqueName: \"kubernetes.io/projected/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-kube-api-access-b98np\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:01.754024 4889 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:01.754042 4889 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:01.754061 4889 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.173248 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-6jnpn"] Feb 19 00:31:02 crc kubenswrapper[4889]: E0219 00:31:02.173595 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" containerName="default-interconnect" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.173608 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" containerName="default-interconnect" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.173758 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" containerName="default-interconnect" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.174324 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.213246 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-6jnpn"] Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.284432 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.284489 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-sasl-users\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.284522 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.284551 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/fc6002ca-0773-4a75-8f4e-0e14e665bd10-sasl-config\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: 
\"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.284688 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.284991 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqlrt\" (UniqueName: \"kubernetes.io/projected/fc6002ca-0773-4a75-8f4e-0e14e665bd10-kube-api-access-dqlrt\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.285037 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.386696 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-sasl-users\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.387089 4889 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.387238 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/fc6002ca-0773-4a75-8f4e-0e14e665bd10-sasl-config\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.387385 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.387565 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqlrt\" (UniqueName: \"kubernetes.io/projected/fc6002ca-0773-4a75-8f4e-0e14e665bd10-kube-api-access-dqlrt\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.387661 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-openstack-credentials\") pod 
\"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.387802 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.388447 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/fc6002ca-0773-4a75-8f4e-0e14e665bd10-sasl-config\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.393285 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.395069 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 
00:31:02.397479 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-sasl-users\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.397538 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.408901 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/fc6002ca-0773-4a75-8f4e-0e14e665bd10-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.411928 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqlrt\" (UniqueName: \"kubernetes.io/projected/fc6002ca-0773-4a75-8f4e-0e14e665bd10-kube-api-access-dqlrt\") pod \"default-interconnect-68864d46cb-6jnpn\" (UID: \"fc6002ca-0773-4a75-8f4e-0e14e665bd10\") " pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.441485 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" 
event={"ID":"3062ad9c-e9fc-45f7-a3ba-32d5af270bcf","Type":"ContainerDied","Data":"70411f525495f9d7c7ad1fe354c79aa1447756793f7e192989a333af8591dd0d"} Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.441566 4889 scope.go:117] "RemoveContainer" containerID="7e2f973b4432e694bd0a22ff78dfff3ddeb7d1044c8a3a380d92d497394ce71e" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.441738 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-z9hdq" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.455027 4889 generic.go:334] "Generic (PLEG): container finished" podID="6352877d-bf5a-4b08-9fb0-f426fe70cdd6" containerID="c3952707fa5405b6be04b1890152277ed92a52b7be6d53ae1cb166e6757d48c7" exitCode=0 Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.455155 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" event={"ID":"6352877d-bf5a-4b08-9fb0-f426fe70cdd6","Type":"ContainerDied","Data":"c3952707fa5405b6be04b1890152277ed92a52b7be6d53ae1cb166e6757d48c7"} Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.455886 4889 scope.go:117] "RemoveContainer" containerID="c3952707fa5405b6be04b1890152277ed92a52b7be6d53ae1cb166e6757d48c7" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.462018 4889 generic.go:334] "Generic (PLEG): container finished" podID="65c0d350-5f06-41d2-bdb0-ec37cf8d42f6" containerID="9677b0cce782d266db3f01a147fe5062485351516168ca15ac455a11e1c9e344" exitCode=0 Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.462089 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" event={"ID":"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6","Type":"ContainerDied","Data":"9677b0cce782d266db3f01a147fe5062485351516168ca15ac455a11e1c9e344"} Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.462912 4889 scope.go:117] 
"RemoveContainer" containerID="9677b0cce782d266db3f01a147fe5062485351516168ca15ac455a11e1c9e344" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.490578 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" event={"ID":"a81a057f-0e52-490e-89a9-83d27b389e0c","Type":"ContainerStarted","Data":"67adf4e434f71d11be1c93686030ec9922629731ff0c2e9b7e84a02c0a16a9e6"} Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.504292 4889 generic.go:334] "Generic (PLEG): container finished" podID="6192d5a7-5f82-4bc6-9a05-7044958b5ce2" containerID="4efa893d3485d095ba4a9203fa78b241cdda9096afb953662636a547bc0b664f" exitCode=0 Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.504451 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" event={"ID":"6192d5a7-5f82-4bc6-9a05-7044958b5ce2","Type":"ContainerDied","Data":"4efa893d3485d095ba4a9203fa78b241cdda9096afb953662636a547bc0b664f"} Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.505428 4889 scope.go:117] "RemoveContainer" containerID="4efa893d3485d095ba4a9203fa78b241cdda9096afb953662636a547bc0b664f" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.511974 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" event={"ID":"f65375c3-144f-4b4a-939c-dfd5d5396469","Type":"ContainerStarted","Data":"8199745db33e2a04116dad744643f8e4105faaa4728339ddee108ecb1b178707"} Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.560812 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.561373 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z9hdq"] Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.565919 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-z9hdq"] Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.734809 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3062ad9c-e9fc-45f7-a3ba-32d5af270bcf" path="/var/lib/kubelet/pods/3062ad9c-e9fc-45f7-a3ba-32d5af270bcf/volumes" Feb 19 00:31:02 crc kubenswrapper[4889]: I0219 00:31:02.917798 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-6jnpn"] Feb 19 00:31:03 crc kubenswrapper[4889]: I0219 00:31:03.533602 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" event={"ID":"6192d5a7-5f82-4bc6-9a05-7044958b5ce2","Type":"ContainerStarted","Data":"3cbfec32112c7f4677d64a49eb7b956d8cf81f6bb7a92dce46eb984416718ed3"} Feb 19 00:31:03 crc kubenswrapper[4889]: I0219 00:31:03.538374 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" event={"ID":"fc6002ca-0773-4a75-8f4e-0e14e665bd10","Type":"ContainerStarted","Data":"0b044b783cc53f30b06552b8d5dfadb837817bcb930fb204376c1a5b53543b4c"} Feb 19 00:31:03 crc kubenswrapper[4889]: I0219 00:31:03.538432 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" event={"ID":"fc6002ca-0773-4a75-8f4e-0e14e665bd10","Type":"ContainerStarted","Data":"5d5480d1001323f011f86d644bd7ba7fe38cb94ba437248c5aeb3e6940760955"} Feb 19 00:31:03 crc kubenswrapper[4889]: I0219 00:31:03.544353 4889 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" event={"ID":"6352877d-bf5a-4b08-9fb0-f426fe70cdd6","Type":"ContainerStarted","Data":"2bd1f8745c9e4a39c2de5b30f6c6fa1c038951b865a7bc7672db05e683ad1e44"} Feb 19 00:31:03 crc kubenswrapper[4889]: I0219 00:31:03.546722 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" event={"ID":"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6","Type":"ContainerStarted","Data":"6f7303dcb82d5e87669e4b8eaa4479340a3541a4ecb32aac1bf880fd066aa350"} Feb 19 00:31:03 crc kubenswrapper[4889]: I0219 00:31:03.669790 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-6jnpn" podStartSLOduration=3.669758904 podStartE2EDuration="3.669758904s" podCreationTimestamp="2026-02-19 00:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:31:03.665743198 +0000 UTC m=+1489.630408189" watchObservedRunningTime="2026-02-19 00:31:03.669758904 +0000 UTC m=+1489.634423895" Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.683212 4889 generic.go:334] "Generic (PLEG): container finished" podID="6352877d-bf5a-4b08-9fb0-f426fe70cdd6" containerID="2bd1f8745c9e4a39c2de5b30f6c6fa1c038951b865a7bc7672db05e683ad1e44" exitCode=0 Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.683257 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" event={"ID":"6352877d-bf5a-4b08-9fb0-f426fe70cdd6","Type":"ContainerDied","Data":"2bd1f8745c9e4a39c2de5b30f6c6fa1c038951b865a7bc7672db05e683ad1e44"} Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.683333 4889 scope.go:117] "RemoveContainer" containerID="c3952707fa5405b6be04b1890152277ed92a52b7be6d53ae1cb166e6757d48c7" Feb 19 00:31:04 crc 
kubenswrapper[4889]: I0219 00:31:04.684252 4889 scope.go:117] "RemoveContainer" containerID="2bd1f8745c9e4a39c2de5b30f6c6fa1c038951b865a7bc7672db05e683ad1e44" Feb 19 00:31:04 crc kubenswrapper[4889]: E0219 00:31:04.684611 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6_service-telemetry(6352877d-bf5a-4b08-9fb0-f426fe70cdd6)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" podUID="6352877d-bf5a-4b08-9fb0-f426fe70cdd6" Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.695131 4889 generic.go:334] "Generic (PLEG): container finished" podID="65c0d350-5f06-41d2-bdb0-ec37cf8d42f6" containerID="6f7303dcb82d5e87669e4b8eaa4479340a3541a4ecb32aac1bf880fd066aa350" exitCode=0 Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.695229 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" event={"ID":"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6","Type":"ContainerDied","Data":"6f7303dcb82d5e87669e4b8eaa4479340a3541a4ecb32aac1bf880fd066aa350"} Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.696193 4889 scope.go:117] "RemoveContainer" containerID="6f7303dcb82d5e87669e4b8eaa4479340a3541a4ecb32aac1bf880fd066aa350" Feb 19 00:31:04 crc kubenswrapper[4889]: E0219 00:31:04.696509 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr_service-telemetry(65c0d350-5f06-41d2-bdb0-ec37cf8d42f6)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" podUID="65c0d350-5f06-41d2-bdb0-ec37cf8d42f6" Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.720420 4889 generic.go:334] 
"Generic (PLEG): container finished" podID="a81a057f-0e52-490e-89a9-83d27b389e0c" containerID="67adf4e434f71d11be1c93686030ec9922629731ff0c2e9b7e84a02c0a16a9e6" exitCode=0 Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.720495 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" event={"ID":"a81a057f-0e52-490e-89a9-83d27b389e0c","Type":"ContainerDied","Data":"67adf4e434f71d11be1c93686030ec9922629731ff0c2e9b7e84a02c0a16a9e6"} Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.721179 4889 scope.go:117] "RemoveContainer" containerID="67adf4e434f71d11be1c93686030ec9922629731ff0c2e9b7e84a02c0a16a9e6" Feb 19 00:31:04 crc kubenswrapper[4889]: E0219 00:31:04.721418 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg_service-telemetry(a81a057f-0e52-490e-89a9-83d27b389e0c)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" podUID="a81a057f-0e52-490e-89a9-83d27b389e0c" Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.730750 4889 scope.go:117] "RemoveContainer" containerID="9677b0cce782d266db3f01a147fe5062485351516168ca15ac455a11e1c9e344" Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.740595 4889 generic.go:334] "Generic (PLEG): container finished" podID="6192d5a7-5f82-4bc6-9a05-7044958b5ce2" containerID="3cbfec32112c7f4677d64a49eb7b956d8cf81f6bb7a92dce46eb984416718ed3" exitCode=0 Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.758209 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" event={"ID":"6192d5a7-5f82-4bc6-9a05-7044958b5ce2","Type":"ContainerDied","Data":"3cbfec32112c7f4677d64a49eb7b956d8cf81f6bb7a92dce46eb984416718ed3"} Feb 19 00:31:04 crc 
kubenswrapper[4889]: I0219 00:31:04.763898 4889 generic.go:334] "Generic (PLEG): container finished" podID="f65375c3-144f-4b4a-939c-dfd5d5396469" containerID="8199745db33e2a04116dad744643f8e4105faaa4728339ddee108ecb1b178707" exitCode=0 Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.764867 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" event={"ID":"f65375c3-144f-4b4a-939c-dfd5d5396469","Type":"ContainerDied","Data":"8199745db33e2a04116dad744643f8e4105faaa4728339ddee108ecb1b178707"} Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.765170 4889 scope.go:117] "RemoveContainer" containerID="8199745db33e2a04116dad744643f8e4105faaa4728339ddee108ecb1b178707" Feb 19 00:31:04 crc kubenswrapper[4889]: E0219 00:31:04.765367 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf_service-telemetry(f65375c3-144f-4b4a-939c-dfd5d5396469)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" podUID="f65375c3-144f-4b4a-939c-dfd5d5396469" Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.777287 4889 scope.go:117] "RemoveContainer" containerID="3cbfec32112c7f4677d64a49eb7b956d8cf81f6bb7a92dce46eb984416718ed3" Feb 19 00:31:04 crc kubenswrapper[4889]: E0219 00:31:04.777592 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-96874f644-5nw5m_service-telemetry(6192d5a7-5f82-4bc6-9a05-7044958b5ce2)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" podUID="6192d5a7-5f82-4bc6-9a05-7044958b5ce2" Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.803206 4889 scope.go:117] 
"RemoveContainer" containerID="f08eceec03e52711d1cf4a0a449c1f051db0a1c07dda0c9b8c7c8d986e891da6" Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.856511 4889 scope.go:117] "RemoveContainer" containerID="4efa893d3485d095ba4a9203fa78b241cdda9096afb953662636a547bc0b664f" Feb 19 00:31:04 crc kubenswrapper[4889]: I0219 00:31:04.914443 4889 scope.go:117] "RemoveContainer" containerID="4cb5f53175fc5efe76228a040f06f8dca41ee0a3f11ac115cf9a6e01f16a426b" Feb 19 00:31:16 crc kubenswrapper[4889]: I0219 00:31:16.726089 4889 scope.go:117] "RemoveContainer" containerID="2bd1f8745c9e4a39c2de5b30f6c6fa1c038951b865a7bc7672db05e683ad1e44" Feb 19 00:31:17 crc kubenswrapper[4889]: I0219 00:31:17.891318 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6" event={"ID":"6352877d-bf5a-4b08-9fb0-f426fe70cdd6","Type":"ContainerStarted","Data":"daa76c6a34b231417e2f155ecec1cdbabc924085f7987c1c80554970f2f3981d"} Feb 19 00:31:18 crc kubenswrapper[4889]: I0219 00:31:18.775564 4889 scope.go:117] "RemoveContainer" containerID="6f7303dcb82d5e87669e4b8eaa4479340a3541a4ecb32aac1bf880fd066aa350" Feb 19 00:31:18 crc kubenswrapper[4889]: I0219 00:31:18.775741 4889 scope.go:117] "RemoveContainer" containerID="8199745db33e2a04116dad744643f8e4105faaa4728339ddee108ecb1b178707" Feb 19 00:31:18 crc kubenswrapper[4889]: I0219 00:31:18.775830 4889 scope.go:117] "RemoveContainer" containerID="67adf4e434f71d11be1c93686030ec9922629731ff0c2e9b7e84a02c0a16a9e6" Feb 19 00:31:19 crc kubenswrapper[4889]: I0219 00:31:19.725790 4889 scope.go:117] "RemoveContainer" containerID="3cbfec32112c7f4677d64a49eb7b956d8cf81f6bb7a92dce46eb984416718ed3" Feb 19 00:31:20 crc kubenswrapper[4889]: I0219 00:31:20.923412 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr" 
event={"ID":"65c0d350-5f06-41d2-bdb0-ec37cf8d42f6","Type":"ContainerStarted","Data":"d3ec4cf7d4b858a9908e9f3306e7bce227a1dc9cff4f8963f4c08e665a03f6f1"} Feb 19 00:31:20 crc kubenswrapper[4889]: I0219 00:31:20.929783 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg" event={"ID":"a81a057f-0e52-490e-89a9-83d27b389e0c","Type":"ContainerStarted","Data":"4fe4da2dbed8672443a6f10979b9aef4694629fe339f29869db4cac7dba34f78"} Feb 19 00:31:20 crc kubenswrapper[4889]: I0219 00:31:20.932632 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-96874f644-5nw5m" event={"ID":"6192d5a7-5f82-4bc6-9a05-7044958b5ce2","Type":"ContainerStarted","Data":"d01d2e79ce127517103824321c9c3aa91dafaafc8dbec2c2f72e1ab940459d1c"} Feb 19 00:31:20 crc kubenswrapper[4889]: I0219 00:31:20.936533 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf" event={"ID":"f65375c3-144f-4b4a-939c-dfd5d5396469","Type":"ContainerStarted","Data":"5ad42fdd7be513fc4e54ebde6c6b06b834f965baef9849cc1cf395149b9d175e"} Feb 19 00:31:27 crc kubenswrapper[4889]: I0219 00:31:27.996129 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0068ff27-e035-4aac-ada3-5f46690fb384","Type":"ContainerStarted","Data":"8354b64d76c83954ed3a8356f82df85a83486836548e296dea1e85360dde2029"} Feb 19 00:31:27 crc kubenswrapper[4889]: I0219 00:31:27.996977 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"0068ff27-e035-4aac-ada3-5f46690fb384","Type":"ContainerStarted","Data":"a6df1c7e981e019bb9a1d9dc5fb3987dc1b1c24165798da4438a3805787e5fdd"} Feb 19 00:31:28 crc kubenswrapper[4889]: I0219 00:31:28.035808 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/alertmanager-default-0" podStartSLOduration=25.588515074 podStartE2EDuration="1m23.035772663s" podCreationTimestamp="2026-02-19 00:30:05 +0000 UTC" firstStartedPulling="2026-02-19 00:30:30.183168588 +0000 UTC m=+1456.147833579" lastFinishedPulling="2026-02-19 00:31:27.630426177 +0000 UTC m=+1513.595091168" observedRunningTime="2026-02-19 00:31:28.019725109 +0000 UTC m=+1513.984390100" watchObservedRunningTime="2026-02-19 00:31:28.035772663 +0000 UTC m=+1514.000437654" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.377363 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.381195 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.384778 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.385174 4889 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.405567 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.465112 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/8ec38d7e-0a74-4610-a04b-9356078917c5-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"8ec38d7e-0a74-4610-a04b-9356078917c5\") " pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.465211 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4kns\" (UniqueName: 
\"kubernetes.io/projected/8ec38d7e-0a74-4610-a04b-9356078917c5-kube-api-access-t4kns\") pod \"qdr-test\" (UID: \"8ec38d7e-0a74-4610-a04b-9356078917c5\") " pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.465297 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/8ec38d7e-0a74-4610-a04b-9356078917c5-qdr-test-config\") pod \"qdr-test\" (UID: \"8ec38d7e-0a74-4610-a04b-9356078917c5\") " pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.566817 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/8ec38d7e-0a74-4610-a04b-9356078917c5-qdr-test-config\") pod \"qdr-test\" (UID: \"8ec38d7e-0a74-4610-a04b-9356078917c5\") " pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.567214 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/8ec38d7e-0a74-4610-a04b-9356078917c5-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"8ec38d7e-0a74-4610-a04b-9356078917c5\") " pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.567431 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4kns\" (UniqueName: \"kubernetes.io/projected/8ec38d7e-0a74-4610-a04b-9356078917c5-kube-api-access-t4kns\") pod \"qdr-test\" (UID: \"8ec38d7e-0a74-4610-a04b-9356078917c5\") " pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.567654 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/8ec38d7e-0a74-4610-a04b-9356078917c5-qdr-test-config\") pod \"qdr-test\" (UID: 
\"8ec38d7e-0a74-4610-a04b-9356078917c5\") " pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.575354 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/8ec38d7e-0a74-4610-a04b-9356078917c5-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"8ec38d7e-0a74-4610-a04b-9356078917c5\") " pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.597961 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4kns\" (UniqueName: \"kubernetes.io/projected/8ec38d7e-0a74-4610-a04b-9356078917c5-kube-api-access-t4kns\") pod \"qdr-test\" (UID: \"8ec38d7e-0a74-4610-a04b-9356078917c5\") " pod="service-telemetry/qdr-test" Feb 19 00:31:38 crc kubenswrapper[4889]: I0219 00:31:38.701273 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 19 00:31:39 crc kubenswrapper[4889]: I0219 00:31:39.154988 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:31:39 crc kubenswrapper[4889]: W0219 00:31:39.178063 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ec38d7e_0a74_4610_a04b_9356078917c5.slice/crio-8efafdc1a937ca2a76563d4fe6b2991eaaeb754a69ddf4a916ed20150bfa39e6 WatchSource:0}: Error finding container 8efafdc1a937ca2a76563d4fe6b2991eaaeb754a69ddf4a916ed20150bfa39e6: Status 404 returned error can't find the container with id 8efafdc1a937ca2a76563d4fe6b2991eaaeb754a69ddf4a916ed20150bfa39e6 Feb 19 00:31:40 crc kubenswrapper[4889]: I0219 00:31:40.097744 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"8ec38d7e-0a74-4610-a04b-9356078917c5","Type":"ContainerStarted","Data":"8efafdc1a937ca2a76563d4fe6b2991eaaeb754a69ddf4a916ed20150bfa39e6"} 
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.188085 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"8ec38d7e-0a74-4610-a04b-9356078917c5","Type":"ContainerStarted","Data":"569ad26791972032d424dee9041d21925b838192e01fee4064d4e58d55d35342"}
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.219034 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.592019068 podStartE2EDuration="10.219004697s" podCreationTimestamp="2026-02-19 00:31:38 +0000 UTC" firstStartedPulling="2026-02-19 00:31:39.180346984 +0000 UTC m=+1525.145011975" lastFinishedPulling="2026-02-19 00:31:47.807332613 +0000 UTC m=+1533.771997604" observedRunningTime="2026-02-19 00:31:48.210926764 +0000 UTC m=+1534.175591775" watchObservedRunningTime="2026-02-19 00:31:48.219004697 +0000 UTC m=+1534.183669688"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.597771 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8qvpk"]
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.601864 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.606328 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.606440 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.606669 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.606893 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.607207 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.607445 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.638984 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8qvpk"]
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.745498 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-config\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.745546 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwq9v\" (UniqueName: \"kubernetes.io/projected/3bfe486f-9dde-4250-b0df-834592b6934b-kube-api-access-wwq9v\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.745615 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-healthcheck-log\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.745640 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-sensubility-config\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.745740 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.745814 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.745844 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.846827 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-healthcheck-log\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.846897 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-sensubility-config\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.846941 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.846981 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.847006 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.847039 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-config\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.847056 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwq9v\" (UniqueName: \"kubernetes.io/projected/3bfe486f-9dde-4250-b0df-834592b6934b-kube-api-access-wwq9v\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.848036 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.848154 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.848375 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-config\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.848401 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.848769 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-healthcheck-log\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.848927 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-sensubility-config\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.872017 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwq9v\" (UniqueName: \"kubernetes.io/projected/3bfe486f-9dde-4250-b0df-834592b6934b-kube-api-access-wwq9v\") pod \"stf-smoketest-smoke1-8qvpk\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") " pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:48 crc kubenswrapper[4889]: I0219 00:31:48.932153 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:31:49 crc kubenswrapper[4889]: I0219 00:31:49.003306 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"]
Feb 19 00:31:49 crc kubenswrapper[4889]: I0219 00:31:49.004492 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Feb 19 00:31:49 crc kubenswrapper[4889]: I0219 00:31:49.011069 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Feb 19 00:31:49 crc kubenswrapper[4889]: I0219 00:31:49.152842 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tc2c\" (UniqueName: \"kubernetes.io/projected/958cda51-a8a1-4839-8aa8-7f6dfd11080f-kube-api-access-2tc2c\") pod \"curl\" (UID: \"958cda51-a8a1-4839-8aa8-7f6dfd11080f\") " pod="service-telemetry/curl"
Feb 19 00:31:49 crc kubenswrapper[4889]: I0219 00:31:49.254744 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tc2c\" (UniqueName: \"kubernetes.io/projected/958cda51-a8a1-4839-8aa8-7f6dfd11080f-kube-api-access-2tc2c\") pod \"curl\" (UID: \"958cda51-a8a1-4839-8aa8-7f6dfd11080f\") " pod="service-telemetry/curl"
Feb 19 00:31:49 crc kubenswrapper[4889]: I0219 00:31:49.276039 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tc2c\" (UniqueName: \"kubernetes.io/projected/958cda51-a8a1-4839-8aa8-7f6dfd11080f-kube-api-access-2tc2c\") pod \"curl\" (UID: \"958cda51-a8a1-4839-8aa8-7f6dfd11080f\") " pod="service-telemetry/curl"
Feb 19 00:31:49 crc kubenswrapper[4889]: I0219 00:31:49.332726 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Feb 19 00:31:49 crc kubenswrapper[4889]: I0219 00:31:49.414168 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-8qvpk"]
Feb 19 00:31:49 crc kubenswrapper[4889]: W0219 00:31:49.420881 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bfe486f_9dde_4250_b0df_834592b6934b.slice/crio-a19f4cd3be8fbd9de52928471911bfa1b532fc91a853ec895bb6eb01480718ba WatchSource:0}: Error finding container a19f4cd3be8fbd9de52928471911bfa1b532fc91a853ec895bb6eb01480718ba: Status 404 returned error can't find the container with id a19f4cd3be8fbd9de52928471911bfa1b532fc91a853ec895bb6eb01480718ba
Feb 19 00:31:49 crc kubenswrapper[4889]: I0219 00:31:49.593305 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Feb 19 00:31:50 crc kubenswrapper[4889]: I0219 00:31:50.218813 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" event={"ID":"3bfe486f-9dde-4250-b0df-834592b6934b","Type":"ContainerStarted","Data":"a19f4cd3be8fbd9de52928471911bfa1b532fc91a853ec895bb6eb01480718ba"}
Feb 19 00:31:50 crc kubenswrapper[4889]: I0219 00:31:50.220464 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"958cda51-a8a1-4839-8aa8-7f6dfd11080f","Type":"ContainerStarted","Data":"014920b1739791102f35fb455ce6552b7bae7e791d4637aa824a270198c5f6f1"}
Feb 19 00:31:52 crc kubenswrapper[4889]: I0219 00:31:52.243276 4889 generic.go:334] "Generic (PLEG): container finished" podID="958cda51-a8a1-4839-8aa8-7f6dfd11080f" containerID="c4db7a26fa7ec7baf58881847d481ccf10d2435c986d24b620deda785ee1c902" exitCode=0
Feb 19 00:31:52 crc kubenswrapper[4889]: I0219 00:31:52.243376 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"958cda51-a8a1-4839-8aa8-7f6dfd11080f","Type":"ContainerDied","Data":"c4db7a26fa7ec7baf58881847d481ccf10d2435c986d24b620deda785ee1c902"}
Feb 19 00:32:01 crc kubenswrapper[4889]: I0219 00:32:01.890233 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Feb 19 00:32:01 crc kubenswrapper[4889]: I0219 00:32:01.977377 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tc2c\" (UniqueName: \"kubernetes.io/projected/958cda51-a8a1-4839-8aa8-7f6dfd11080f-kube-api-access-2tc2c\") pod \"958cda51-a8a1-4839-8aa8-7f6dfd11080f\" (UID: \"958cda51-a8a1-4839-8aa8-7f6dfd11080f\") "
Feb 19 00:32:01 crc kubenswrapper[4889]: I0219 00:32:01.984983 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958cda51-a8a1-4839-8aa8-7f6dfd11080f-kube-api-access-2tc2c" (OuterVolumeSpecName: "kube-api-access-2tc2c") pod "958cda51-a8a1-4839-8aa8-7f6dfd11080f" (UID: "958cda51-a8a1-4839-8aa8-7f6dfd11080f"). InnerVolumeSpecName "kube-api-access-2tc2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:32:02 crc kubenswrapper[4889]: I0219 00:32:02.079456 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tc2c\" (UniqueName: \"kubernetes.io/projected/958cda51-a8a1-4839-8aa8-7f6dfd11080f-kube-api-access-2tc2c\") on node \"crc\" DevicePath \"\""
Feb 19 00:32:02 crc kubenswrapper[4889]: I0219 00:32:02.089738 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_958cda51-a8a1-4839-8aa8-7f6dfd11080f/curl/0.log"
Feb 19 00:32:02 crc kubenswrapper[4889]: I0219 00:32:02.330171 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"958cda51-a8a1-4839-8aa8-7f6dfd11080f","Type":"ContainerDied","Data":"014920b1739791102f35fb455ce6552b7bae7e791d4637aa824a270198c5f6f1"}
Feb 19 00:32:02 crc kubenswrapper[4889]: I0219 00:32:02.330639 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014920b1739791102f35fb455ce6552b7bae7e791d4637aa824a270198c5f6f1"
Feb 19 00:32:02 crc kubenswrapper[4889]: I0219 00:32:02.330270 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Feb 19 00:32:02 crc kubenswrapper[4889]: I0219 00:32:02.397289 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-kk4m4_62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1/prometheus-webhook-snmp/0.log"
Feb 19 00:32:03 crc kubenswrapper[4889]: E0219 00:32:03.400322 4889 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleomastercentos9/openstack-collectd:current-tripleo"
Feb 19 00:32:03 crc kubenswrapper[4889]: E0219 00:32:03.401067 4889 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:smoketest-collectd,Image:quay.io/tripleomastercentos9/openstack-collectd:current-tripleo,Command:[/smoketest_collectd_entrypoint.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CLOUDNAME,Value:smoke1,ValueFrom:nil,},EnvVar{Name:ELASTICSEARCH_AUTH_PASS,Value:i5v7KQJjLbVa4gkaeb0VCZoA,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_AUTH_TOKEN,Value:eyJhbGciOiJSUzI1NiIsImtpZCI6InF6SnFxNFFjbVk5VmJQZ2dNMmUxdHFmTlJlVWx4UDhSTlhIamV3RUx4WU0ifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTc3MTQ2NDY5MywiaWF0IjoxNzcxNDYxMDkzLCJpc3MiOiJodHRwczovL2t1YmVybmV0ZXMuZGVmYXVsdC5zdmMiLCJqdGkiOiIyZTBlNTE1Mi05ZWEyLTRkYzgtYWIxOC0yNDE1YjA3MWE4NmMiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6InNlcnZpY2UtdGVsZW1ldHJ5Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InN0Zi1wcm9tZXRoZXVzLXJlYWRlciIsInVpZCI6Ijg3NjIyZWQ3LTE3MmUtNDFhNS05YzVhLTFkMmUwYTlhY2I2NCJ9fSwibmJmIjoxNzcxNDYxMDkzLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6c2VydmljZS10ZWxlbWV0cnk6c3RmLXByb21ldGhldXMtcmVhZGVyIn0.HHUvJF1U_pxTzx_21nFuspHmGoiMa0A3CzmRjPpUxdCaCkyUKJTk0TWQKJk8ydU7l4BwlcFkmByJpRTs8Iql4jTxGX7yCkRpwmOPV8LFKFMaVzTfu23xoSck0ygjZLPDPvY1pxGyK3-0qHxcrTMcf9jRoDHWzb6EENXPklwc8HT9zQ3DvitCu1sv2GzmWRz1bHV6CtrL-Ex2_cje03TxBjns7KpDw6Iza5qJHezcMb7-XrknWjC3p9RpUP3wN-fvR0VfI_AewOodDRuBvngeOIjsc5ccSTP3yMIekWhFYwiq7yV7uogJN9nROvbdlgg2Gy5JPRzTggTwHHP_3XiAfHanrrhzdsAi1bmCd691NtMbA6-oTwShWCYLj_hltRgiBfHte4XZhklYbpEMR5LGmOaxb8jOHyYACLLmtXmcWZV3Dw54KP8aYw5-BYuYfzBIkjx3Op1L3ZfJ0Lyqf0Bh0uhkO7jQuSpNQ_Rj7i0xwBdOOEVh8civZZd_W5at2f03hB9r0Y-5bwdSZfRQaozHb5JYvHDzGcAPnuoQngEhLmBNT9KcDnPwveJSjMoecw6kLgqwBeCWTY_9wrS_mOBQSAoMQoJTKH7KHtBa8u4ifSxk31r5-dfKWUGJq_KS8tHnPWhz9vtXWPQxeH3FmN6mCDiGeNtQRtvdnxU-Me7MlRs,ValueFrom:nil,},EnvVar{Name:OBSERVABILITY_STRATEGY,Value:<>,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:collectd-config,ReadOnly:false,MountPath:/etc/minimal-collectd.conf.template,SubPath:minimal-collectd.conf.template,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:sensubility-config,ReadOnly:false,MountPath:/etc/collectd-sensubility.conf,SubPath:collectd-sensubility.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:healthcheck-log,ReadOnly:false,MountPath:/healthcheck.log,SubPath:healthcheck.log,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:collectd-entrypoint-script,ReadOnly:false,MountPath:/smoketest_collectd_entrypoint.sh,SubPath:smoketest_collectd_entrypoint.sh,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwq9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod stf-smoketest-smoke1-8qvpk_service-telemetry(3bfe486f-9dde-4250-b0df-834592b6934b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 19 00:32:11 crc kubenswrapper[4889]: E0219 00:32:11.170103 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" podUID="3bfe486f-9dde-4250-b0df-834592b6934b"
Feb 19 00:32:11 crc kubenswrapper[4889]: I0219 00:32:11.410592 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" event={"ID":"3bfe486f-9dde-4250-b0df-834592b6934b","Type":"ContainerStarted","Data":"0630f5291c62284486b353c71060cc6903667a676d56ed1f471e38f58976e21c"}
Feb 19 00:32:11 crc kubenswrapper[4889]: E0219 00:32:11.412905 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-collectd:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" podUID="3bfe486f-9dde-4250-b0df-834592b6934b"
Feb 19 00:32:12 crc kubenswrapper[4889]: E0219 00:32:12.419736 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"smoketest-collectd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleomastercentos9/openstack-collectd:current-tripleo\\\"\"" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" podUID="3bfe486f-9dde-4250-b0df-834592b6934b"
Feb 19 00:32:26 crc kubenswrapper[4889]: I0219 00:32:26.662381 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" event={"ID":"3bfe486f-9dde-4250-b0df-834592b6934b","Type":"ContainerStarted","Data":"e176d93422402652450685b0b741906456c8e4208530194a2532025397a5a9a7"}
Feb 19 00:32:26 crc kubenswrapper[4889]: I0219 00:32:26.687483 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" podStartSLOduration=1.739740235 podStartE2EDuration="38.68745609s" podCreationTimestamp="2026-02-19 00:31:48 +0000 UTC" firstStartedPulling="2026-02-19 00:31:49.423253144 +0000 UTC m=+1535.387918145" lastFinishedPulling="2026-02-19 00:32:26.370968999 +0000 UTC m=+1572.335634000" observedRunningTime="2026-02-19 00:32:26.682845046 +0000 UTC m=+1572.647510037" watchObservedRunningTime="2026-02-19 00:32:26.68745609 +0000 UTC m=+1572.652121081"
Feb 19 00:32:32 crc kubenswrapper[4889]: I0219 00:32:32.555871 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-kk4m4_62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1/prometheus-webhook-snmp/0.log"
Feb 19 00:32:43 crc kubenswrapper[4889]: I0219 00:32:43.793574 4889 generic.go:334] "Generic (PLEG): container finished" podID="3bfe486f-9dde-4250-b0df-834592b6934b" containerID="0630f5291c62284486b353c71060cc6903667a676d56ed1f471e38f58976e21c" exitCode=0
Feb 19 00:32:43 crc kubenswrapper[4889]: I0219 00:32:43.793702 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" event={"ID":"3bfe486f-9dde-4250-b0df-834592b6934b","Type":"ContainerDied","Data":"0630f5291c62284486b353c71060cc6903667a676d56ed1f471e38f58976e21c"}
Feb 19 00:32:43 crc kubenswrapper[4889]: I0219 00:32:43.795446 4889 scope.go:117] "RemoveContainer" containerID="0630f5291c62284486b353c71060cc6903667a676d56ed1f471e38f58976e21c"
Feb 19 00:33:00 crc kubenswrapper[4889]: I0219 00:33:00.941994 4889 generic.go:334] "Generic (PLEG): container finished" podID="3bfe486f-9dde-4250-b0df-834592b6934b" containerID="e176d93422402652450685b0b741906456c8e4208530194a2532025397a5a9a7" exitCode=0
Feb 19 00:33:00 crc kubenswrapper[4889]: I0219 00:33:00.942119 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" event={"ID":"3bfe486f-9dde-4250-b0df-834592b6934b","Type":"ContainerDied","Data":"e176d93422402652450685b0b741906456c8e4208530194a2532025397a5a9a7"}
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.228838 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.277451 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-config\") pod \"3bfe486f-9dde-4250-b0df-834592b6934b\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") "
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.277649 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-healthcheck-log\") pod \"3bfe486f-9dde-4250-b0df-834592b6934b\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") "
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.277680 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-entrypoint-script\") pod \"3bfe486f-9dde-4250-b0df-834592b6934b\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") "
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.277712 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-publisher\") pod \"3bfe486f-9dde-4250-b0df-834592b6934b\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") "
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.277733 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-entrypoint-script\") pod \"3bfe486f-9dde-4250-b0df-834592b6934b\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") "
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.277793 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-sensubility-config\") pod \"3bfe486f-9dde-4250-b0df-834592b6934b\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") "
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.277903 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwq9v\" (UniqueName: \"kubernetes.io/projected/3bfe486f-9dde-4250-b0df-834592b6934b-kube-api-access-wwq9v\") pod \"3bfe486f-9dde-4250-b0df-834592b6934b\" (UID: \"3bfe486f-9dde-4250-b0df-834592b6934b\") "
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.297761 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bfe486f-9dde-4250-b0df-834592b6934b-kube-api-access-wwq9v" (OuterVolumeSpecName: "kube-api-access-wwq9v") pod "3bfe486f-9dde-4250-b0df-834592b6934b" (UID: "3bfe486f-9dde-4250-b0df-834592b6934b"). InnerVolumeSpecName "kube-api-access-wwq9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.306529 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "3bfe486f-9dde-4250-b0df-834592b6934b" (UID: "3bfe486f-9dde-4250-b0df-834592b6934b"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.313694 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "3bfe486f-9dde-4250-b0df-834592b6934b" (UID: "3bfe486f-9dde-4250-b0df-834592b6934b"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.314352 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "3bfe486f-9dde-4250-b0df-834592b6934b" (UID: "3bfe486f-9dde-4250-b0df-834592b6934b"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.319281 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "3bfe486f-9dde-4250-b0df-834592b6934b" (UID: "3bfe486f-9dde-4250-b0df-834592b6934b"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.320178 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "3bfe486f-9dde-4250-b0df-834592b6934b" (UID: "3bfe486f-9dde-4250-b0df-834592b6934b"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.327050 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "3bfe486f-9dde-4250-b0df-834592b6934b" (UID: "3bfe486f-9dde-4250-b0df-834592b6934b"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.379314 4889 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-sensubility-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.379357 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwq9v\" (UniqueName: \"kubernetes.io/projected/3bfe486f-9dde-4250-b0df-834592b6934b-kube-api-access-wwq9v\") on node \"crc\" DevicePath \"\""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.379371 4889 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.379382 4889 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-healthcheck-log\") on node \"crc\" DevicePath \"\""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.379395 4889 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.379403 4889 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.379413 4889 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3bfe486f-9dde-4250-b0df-834592b6934b-ceilometer-publisher\") on node \"crc\" DevicePath \"\""
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.961069 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-8qvpk" event={"ID":"3bfe486f-9dde-4250-b0df-834592b6934b","Type":"ContainerDied","Data":"a19f4cd3be8fbd9de52928471911bfa1b532fc91a853ec895bb6eb01480718ba"}
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.961120 4889 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a19f4cd3be8fbd9de52928471911bfa1b532fc91a853ec895bb6eb01480718ba"
Feb 19 00:33:02 crc kubenswrapper[4889]: I0219 00:33:02.961134 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-8qvpk"
Feb 19 00:33:04 crc kubenswrapper[4889]: I0219 00:33:04.489851 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8qvpk_3bfe486f-9dde-4250-b0df-834592b6934b/smoketest-collectd/0.log"
Feb 19 00:33:04 crc kubenswrapper[4889]: I0219 00:33:04.799263 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-8qvpk_3bfe486f-9dde-4250-b0df-834592b6934b/smoketest-ceilometer/0.log"
Feb 19 00:33:05 crc kubenswrapper[4889]: I0219 00:33:05.166266 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-6jnpn_fc6002ca-0773-4a75-8f4e-0e14e665bd10/default-interconnect/0.log"
Feb 19 00:33:05 crc kubenswrapper[4889]: I0219 00:33:05.498044 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf_f65375c3-144f-4b4a-939c-dfd5d5396469/bridge/2.log"
Feb 19 00:33:05 crc kubenswrapper[4889]: I0219 00:33:05.842023 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-cxxkf_f65375c3-144f-4b4a-939c-dfd5d5396469/sg-core/0.log"
Feb 19 00:33:06 crc kubenswrapper[4889]: I0219 00:33:06.155555 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-96874f644-5nw5m_6192d5a7-5f82-4bc6-9a05-7044958b5ce2/bridge/2.log"
Feb 19 00:33:06 crc kubenswrapper[4889]: I0219 00:33:06.430026 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-96874f644-5nw5m_6192d5a7-5f82-4bc6-9a05-7044958b5ce2/sg-core/0.log"
Feb 19 00:33:06 crc kubenswrapper[4889]: I0219 00:33:06.671251 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg_a81a057f-0e52-490e-89a9-83d27b389e0c/bridge/2.log"
Feb 19 00:33:06 crc kubenswrapper[4889]: I0219 00:33:06.933189 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-dkqbg_a81a057f-0e52-490e-89a9-83d27b389e0c/sg-core/0.log"
Feb 19 00:33:07 crc kubenswrapper[4889]: I0219 00:33:07.180912 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6_6352877d-bf5a-4b08-9fb0-f426fe70cdd6/bridge/2.log"
Feb 19 00:33:07 crc kubenswrapper[4889]: I0219 00:33:07.448929 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-c85d7569d-zr5j6_6352877d-bf5a-4b08-9fb0-f426fe70cdd6/sg-core/0.log"
Feb 19 00:33:07 crc kubenswrapper[4889]: I0219 00:33:07.755312 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr_65c0d350-5f06-41d2-bdb0-ec37cf8d42f6/bridge/2.log"
Feb 19 00:33:07 crc kubenswrapper[4889]: I0219 00:33:07.783786 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:33:07 crc kubenswrapper[4889]: I0219 00:33:07.783855 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:33:08 crc kubenswrapper[4889]: I0219 00:33:08.031491 4889 log.go:25] "Finished parsing log 
file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-p55gr_65c0d350-5f06-41d2-bdb0-ec37cf8d42f6/sg-core/0.log" Feb 19 00:33:11 crc kubenswrapper[4889]: I0219 00:33:11.456257 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-7954c5f85d-lz7ws_d9ab18a7-ef5a-441e-825c-7b8cbba10d03/operator/0.log" Feb 19 00:33:11 crc kubenswrapper[4889]: I0219 00:33:11.720333 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_fb22c130-bdcd-4e77-8aea-a731d9d7fad7/prometheus/0.log" Feb 19 00:33:12 crc kubenswrapper[4889]: I0219 00:33:12.054651 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_ec73ead9-66c6-4de8-8def-ab772839b617/elasticsearch/0.log" Feb 19 00:33:12 crc kubenswrapper[4889]: I0219 00:33:12.349936 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-kk4m4_62f71d34-c015-4b2c-ab1c-6ae0b8a5e0a1/prometheus-webhook-snmp/0.log" Feb 19 00:33:12 crc kubenswrapper[4889]: I0219 00:33:12.633565 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_0068ff27-e035-4aac-ada3-5f46690fb384/alertmanager/0.log" Feb 19 00:33:15 crc kubenswrapper[4889]: I0219 00:33:15.541093 4889 scope.go:117] "RemoveContainer" containerID="a90516d91ff40681cc0c464fc8831c940cd090dfc183ec37423e290283c4b972" Feb 19 00:33:15 crc kubenswrapper[4889]: I0219 00:33:15.576679 4889 scope.go:117] "RemoveContainer" containerID="24c3b40c6b81070917b4c9cd6602015820a76f11881ca666aafc75a45ad84028" Feb 19 00:33:27 crc kubenswrapper[4889]: I0219 00:33:27.223373 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-6f9547d677-x9srz_7d1b7264-6b0f-4fac-997e-ec0b9f69fd21/operator/0.log" Feb 19 00:33:30 crc kubenswrapper[4889]: I0219 00:33:30.439889 4889 log.go:25] 
"Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-7954c5f85d-lz7ws_d9ab18a7-ef5a-441e-825c-7b8cbba10d03/operator/0.log" Feb 19 00:33:30 crc kubenswrapper[4889]: I0219 00:33:30.712106 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_8ec38d7e-0a74-4610-a04b-9356078917c5/qdr/0.log" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.787148 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fs8x2"] Feb 19 00:33:31 crc kubenswrapper[4889]: E0219 00:33:31.787561 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfe486f-9dde-4250-b0df-834592b6934b" containerName="smoketest-ceilometer" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.787576 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfe486f-9dde-4250-b0df-834592b6934b" containerName="smoketest-ceilometer" Feb 19 00:33:31 crc kubenswrapper[4889]: E0219 00:33:31.787587 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958cda51-a8a1-4839-8aa8-7f6dfd11080f" containerName="curl" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.787593 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="958cda51-a8a1-4839-8aa8-7f6dfd11080f" containerName="curl" Feb 19 00:33:31 crc kubenswrapper[4889]: E0219 00:33:31.787604 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfe486f-9dde-4250-b0df-834592b6934b" containerName="smoketest-collectd" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.787610 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfe486f-9dde-4250-b0df-834592b6934b" containerName="smoketest-collectd" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.787737 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bfe486f-9dde-4250-b0df-834592b6934b" containerName="smoketest-collectd" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.787755 4889 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3bfe486f-9dde-4250-b0df-834592b6934b" containerName="smoketest-ceilometer" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.787765 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="958cda51-a8a1-4839-8aa8-7f6dfd11080f" containerName="curl" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.788864 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.801843 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fs8x2"] Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.833715 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-utilities\") pod \"community-operators-fs8x2\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.833866 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-catalog-content\") pod \"community-operators-fs8x2\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.833956 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtqk5\" (UniqueName: \"kubernetes.io/projected/d78a1663-594e-4447-8685-c0e532b58c7a-kube-api-access-gtqk5\") pod \"community-operators-fs8x2\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 
00:33:31.935839 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-catalog-content\") pod \"community-operators-fs8x2\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.936368 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtqk5\" (UniqueName: \"kubernetes.io/projected/d78a1663-594e-4447-8685-c0e532b58c7a-kube-api-access-gtqk5\") pod \"community-operators-fs8x2\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.936946 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-utilities\") pod \"community-operators-fs8x2\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.936494 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-catalog-content\") pod \"community-operators-fs8x2\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.937449 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-utilities\") pod \"community-operators-fs8x2\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:31 crc kubenswrapper[4889]: I0219 00:33:31.972691 4889 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtqk5\" (UniqueName: \"kubernetes.io/projected/d78a1663-594e-4447-8685-c0e532b58c7a-kube-api-access-gtqk5\") pod \"community-operators-fs8x2\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:32 crc kubenswrapper[4889]: I0219 00:33:32.113241 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:32 crc kubenswrapper[4889]: I0219 00:33:32.443774 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fs8x2"] Feb 19 00:33:32 crc kubenswrapper[4889]: W0219 00:33:32.453499 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd78a1663_594e_4447_8685_c0e532b58c7a.slice/crio-380eeb81d6e2ca42ac553b4d5f5f065568307f133532a41d9aa02283fd083322 WatchSource:0}: Error finding container 380eeb81d6e2ca42ac553b4d5f5f065568307f133532a41d9aa02283fd083322: Status 404 returned error can't find the container with id 380eeb81d6e2ca42ac553b4d5f5f065568307f133532a41d9aa02283fd083322 Feb 19 00:33:33 crc kubenswrapper[4889]: I0219 00:33:33.208050 4889 generic.go:334] "Generic (PLEG): container finished" podID="d78a1663-594e-4447-8685-c0e532b58c7a" containerID="7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1" exitCode=0 Feb 19 00:33:33 crc kubenswrapper[4889]: I0219 00:33:33.208105 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs8x2" event={"ID":"d78a1663-594e-4447-8685-c0e532b58c7a","Type":"ContainerDied","Data":"7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1"} Feb 19 00:33:33 crc kubenswrapper[4889]: I0219 00:33:33.208140 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs8x2" 
event={"ID":"d78a1663-594e-4447-8685-c0e532b58c7a","Type":"ContainerStarted","Data":"380eeb81d6e2ca42ac553b4d5f5f065568307f133532a41d9aa02283fd083322"} Feb 19 00:33:34 crc kubenswrapper[4889]: I0219 00:33:34.218615 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs8x2" event={"ID":"d78a1663-594e-4447-8685-c0e532b58c7a","Type":"ContainerStarted","Data":"095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560"} Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.154925 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4gwk"] Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.157482 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.172034 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4gwk"] Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.230479 4889 generic.go:334] "Generic (PLEG): container finished" podID="d78a1663-594e-4447-8685-c0e532b58c7a" containerID="095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560" exitCode=0 Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.230555 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs8x2" event={"ID":"d78a1663-594e-4447-8685-c0e532b58c7a","Type":"ContainerDied","Data":"095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560"} Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.296713 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-catalog-content\") pod \"redhat-operators-l4gwk\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 
00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.296791 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dg8m\" (UniqueName: \"kubernetes.io/projected/5aa7c452-9337-495c-88ae-7d99174c6ccd-kube-api-access-4dg8m\") pod \"redhat-operators-l4gwk\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.296853 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-utilities\") pod \"redhat-operators-l4gwk\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.398684 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-catalog-content\") pod \"redhat-operators-l4gwk\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.398769 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dg8m\" (UniqueName: \"kubernetes.io/projected/5aa7c452-9337-495c-88ae-7d99174c6ccd-kube-api-access-4dg8m\") pod \"redhat-operators-l4gwk\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.398845 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-utilities\") pod \"redhat-operators-l4gwk\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 
00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.399447 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-catalog-content\") pod \"redhat-operators-l4gwk\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.399521 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-utilities\") pod \"redhat-operators-l4gwk\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.428892 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dg8m\" (UniqueName: \"kubernetes.io/projected/5aa7c452-9337-495c-88ae-7d99174c6ccd-kube-api-access-4dg8m\") pod \"redhat-operators-l4gwk\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.475268 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:35 crc kubenswrapper[4889]: I0219 00:33:35.849829 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4gwk"] Feb 19 00:33:35 crc kubenswrapper[4889]: W0219 00:33:35.857663 4889 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5aa7c452_9337_495c_88ae_7d99174c6ccd.slice/crio-dfb9e7657f7783e8de194351b249113bcd60f50055b8588a3a33d48858da6234 WatchSource:0}: Error finding container dfb9e7657f7783e8de194351b249113bcd60f50055b8588a3a33d48858da6234: Status 404 returned error can't find the container with id dfb9e7657f7783e8de194351b249113bcd60f50055b8588a3a33d48858da6234 Feb 19 00:33:36 crc kubenswrapper[4889]: I0219 00:33:36.250499 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4gwk" event={"ID":"5aa7c452-9337-495c-88ae-7d99174c6ccd","Type":"ContainerStarted","Data":"fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5"} Feb 19 00:33:36 crc kubenswrapper[4889]: I0219 00:33:36.250571 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4gwk" event={"ID":"5aa7c452-9337-495c-88ae-7d99174c6ccd","Type":"ContainerStarted","Data":"dfb9e7657f7783e8de194351b249113bcd60f50055b8588a3a33d48858da6234"} Feb 19 00:33:37 crc kubenswrapper[4889]: I0219 00:33:37.260414 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs8x2" event={"ID":"d78a1663-594e-4447-8685-c0e532b58c7a","Type":"ContainerStarted","Data":"891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b"} Feb 19 00:33:37 crc kubenswrapper[4889]: I0219 00:33:37.262277 4889 generic.go:334] "Generic (PLEG): container finished" podID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerID="fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5" 
exitCode=0 Feb 19 00:33:37 crc kubenswrapper[4889]: I0219 00:33:37.262342 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4gwk" event={"ID":"5aa7c452-9337-495c-88ae-7d99174c6ccd","Type":"ContainerDied","Data":"fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5"} Feb 19 00:33:37 crc kubenswrapper[4889]: I0219 00:33:37.284857 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fs8x2" podStartSLOduration=3.422764518 podStartE2EDuration="6.284830814s" podCreationTimestamp="2026-02-19 00:33:31 +0000 UTC" firstStartedPulling="2026-02-19 00:33:33.209982765 +0000 UTC m=+1639.174647756" lastFinishedPulling="2026-02-19 00:33:36.072049071 +0000 UTC m=+1642.036714052" observedRunningTime="2026-02-19 00:33:37.283558484 +0000 UTC m=+1643.248223495" watchObservedRunningTime="2026-02-19 00:33:37.284830814 +0000 UTC m=+1643.249495805" Feb 19 00:33:37 crc kubenswrapper[4889]: I0219 00:33:37.781568 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:33:37 crc kubenswrapper[4889]: I0219 00:33:37.781641 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:33:39 crc kubenswrapper[4889]: I0219 00:33:39.280918 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4gwk" 
event={"ID":"5aa7c452-9337-495c-88ae-7d99174c6ccd","Type":"ContainerStarted","Data":"19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6"} Feb 19 00:33:40 crc kubenswrapper[4889]: I0219 00:33:40.295086 4889 generic.go:334] "Generic (PLEG): container finished" podID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerID="19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6" exitCode=0 Feb 19 00:33:40 crc kubenswrapper[4889]: I0219 00:33:40.295231 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4gwk" event={"ID":"5aa7c452-9337-495c-88ae-7d99174c6ccd","Type":"ContainerDied","Data":"19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6"} Feb 19 00:33:42 crc kubenswrapper[4889]: I0219 00:33:42.113434 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:42 crc kubenswrapper[4889]: I0219 00:33:42.113846 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:42 crc kubenswrapper[4889]: I0219 00:33:42.169405 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:42 crc kubenswrapper[4889]: I0219 00:33:42.315959 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4gwk" event={"ID":"5aa7c452-9337-495c-88ae-7d99174c6ccd","Type":"ContainerStarted","Data":"60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae"} Feb 19 00:33:42 crc kubenswrapper[4889]: I0219 00:33:42.338888 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4gwk" podStartSLOduration=3.736524693 podStartE2EDuration="7.338853477s" podCreationTimestamp="2026-02-19 00:33:35 +0000 UTC" firstStartedPulling="2026-02-19 00:33:37.263473762 +0000 UTC 
m=+1643.228138753" lastFinishedPulling="2026-02-19 00:33:40.865802536 +0000 UTC m=+1646.830467537" observedRunningTime="2026-02-19 00:33:42.333975574 +0000 UTC m=+1648.298640565" watchObservedRunningTime="2026-02-19 00:33:42.338853477 +0000 UTC m=+1648.303518468" Feb 19 00:33:42 crc kubenswrapper[4889]: I0219 00:33:42.370801 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:44 crc kubenswrapper[4889]: I0219 00:33:44.344412 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fs8x2"] Feb 19 00:33:44 crc kubenswrapper[4889]: I0219 00:33:44.345163 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fs8x2" podUID="d78a1663-594e-4447-8685-c0e532b58c7a" containerName="registry-server" containerID="cri-o://891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b" gracePeriod=2 Feb 19 00:33:44 crc kubenswrapper[4889]: I0219 00:33:44.761962 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:44 crc kubenswrapper[4889]: I0219 00:33:44.903509 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtqk5\" (UniqueName: \"kubernetes.io/projected/d78a1663-594e-4447-8685-c0e532b58c7a-kube-api-access-gtqk5\") pod \"d78a1663-594e-4447-8685-c0e532b58c7a\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " Feb 19 00:33:44 crc kubenswrapper[4889]: I0219 00:33:44.903597 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-utilities\") pod \"d78a1663-594e-4447-8685-c0e532b58c7a\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " Feb 19 00:33:44 crc kubenswrapper[4889]: I0219 00:33:44.903688 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-catalog-content\") pod \"d78a1663-594e-4447-8685-c0e532b58c7a\" (UID: \"d78a1663-594e-4447-8685-c0e532b58c7a\") " Feb 19 00:33:44 crc kubenswrapper[4889]: I0219 00:33:44.906335 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-utilities" (OuterVolumeSpecName: "utilities") pod "d78a1663-594e-4447-8685-c0e532b58c7a" (UID: "d78a1663-594e-4447-8685-c0e532b58c7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:33:44 crc kubenswrapper[4889]: I0219 00:33:44.916019 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78a1663-594e-4447-8685-c0e532b58c7a-kube-api-access-gtqk5" (OuterVolumeSpecName: "kube-api-access-gtqk5") pod "d78a1663-594e-4447-8685-c0e532b58c7a" (UID: "d78a1663-594e-4447-8685-c0e532b58c7a"). InnerVolumeSpecName "kube-api-access-gtqk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:33:44 crc kubenswrapper[4889]: I0219 00:33:44.954087 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d78a1663-594e-4447-8685-c0e532b58c7a" (UID: "d78a1663-594e-4447-8685-c0e532b58c7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.006650 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtqk5\" (UniqueName: \"kubernetes.io/projected/d78a1663-594e-4447-8685-c0e532b58c7a-kube-api-access-gtqk5\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.006727 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.006747 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d78a1663-594e-4447-8685-c0e532b58c7a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.342458 4889 generic.go:334] "Generic (PLEG): container finished" podID="d78a1663-594e-4447-8685-c0e532b58c7a" containerID="891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b" exitCode=0 Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.342526 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs8x2" event={"ID":"d78a1663-594e-4447-8685-c0e532b58c7a","Type":"ContainerDied","Data":"891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b"} Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.342583 4889 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-fs8x2" event={"ID":"d78a1663-594e-4447-8685-c0e532b58c7a","Type":"ContainerDied","Data":"380eeb81d6e2ca42ac553b4d5f5f065568307f133532a41d9aa02283fd083322"} Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.342581 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs8x2" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.342615 4889 scope.go:117] "RemoveContainer" containerID="891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.370918 4889 scope.go:117] "RemoveContainer" containerID="095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.382951 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fs8x2"] Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.392000 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fs8x2"] Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.396558 4889 scope.go:117] "RemoveContainer" containerID="7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.418513 4889 scope.go:117] "RemoveContainer" containerID="891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b" Feb 19 00:33:45 crc kubenswrapper[4889]: E0219 00:33:45.419016 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b\": container with ID starting with 891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b not found: ID does not exist" containerID="891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 
00:33:45.419112 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b"} err="failed to get container status \"891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b\": rpc error: code = NotFound desc = could not find container \"891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b\": container with ID starting with 891079b2ab660ffb1096aa8a7d75b9f988d343d9c975cfeb6c74ec452632286b not found: ID does not exist" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.419210 4889 scope.go:117] "RemoveContainer" containerID="095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560" Feb 19 00:33:45 crc kubenswrapper[4889]: E0219 00:33:45.419955 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560\": container with ID starting with 095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560 not found: ID does not exist" containerID="095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.420014 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560"} err="failed to get container status \"095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560\": rpc error: code = NotFound desc = could not find container \"095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560\": container with ID starting with 095fcc7ba67aaa7cbbfc23694a49def4fe0740f68dc072395e0cbf2747f6d560 not found: ID does not exist" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.420053 4889 scope.go:117] "RemoveContainer" containerID="7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1" Feb 19 00:33:45 crc 
kubenswrapper[4889]: E0219 00:33:45.420487 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1\": container with ID starting with 7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1 not found: ID does not exist" containerID="7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.420525 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1"} err="failed to get container status \"7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1\": rpc error: code = NotFound desc = could not find container \"7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1\": container with ID starting with 7d0084fdf3997f01e1e9748d10877472c0184321c2235db16225962674e1fca1 not found: ID does not exist" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.477678 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:45 crc kubenswrapper[4889]: I0219 00:33:45.477730 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:46 crc kubenswrapper[4889]: I0219 00:33:46.532582 4889 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l4gwk" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerName="registry-server" probeResult="failure" output=< Feb 19 00:33:46 crc kubenswrapper[4889]: timeout: failed to connect service ":50051" within 1s Feb 19 00:33:46 crc kubenswrapper[4889]: > Feb 19 00:33:46 crc kubenswrapper[4889]: I0219 00:33:46.735573 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d78a1663-594e-4447-8685-c0e532b58c7a" path="/var/lib/kubelet/pods/d78a1663-594e-4447-8685-c0e532b58c7a/volumes" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.271608 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-zxtb7/must-gather-424th"] Feb 19 00:33:55 crc kubenswrapper[4889]: E0219 00:33:55.274411 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78a1663-594e-4447-8685-c0e532b58c7a" containerName="extract-utilities" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.274520 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78a1663-594e-4447-8685-c0e532b58c7a" containerName="extract-utilities" Feb 19 00:33:55 crc kubenswrapper[4889]: E0219 00:33:55.274618 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78a1663-594e-4447-8685-c0e532b58c7a" containerName="registry-server" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.274699 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78a1663-594e-4447-8685-c0e532b58c7a" containerName="registry-server" Feb 19 00:33:55 crc kubenswrapper[4889]: E0219 00:33:55.274818 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78a1663-594e-4447-8685-c0e532b58c7a" containerName="extract-content" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.274922 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78a1663-594e-4447-8685-c0e532b58c7a" containerName="extract-content" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.275241 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78a1663-594e-4447-8685-c0e532b58c7a" containerName="registry-server" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.276459 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zxtb7/must-gather-424th" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.278759 4889 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-zxtb7"/"default-dockercfg-fk2wl" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.281024 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-must-gather-output\") pod \"must-gather-424th\" (UID: \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\") " pod="openshift-must-gather-zxtb7/must-gather-424th" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.281158 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvvt4\" (UniqueName: \"kubernetes.io/projected/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-kube-api-access-nvvt4\") pod \"must-gather-424th\" (UID: \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\") " pod="openshift-must-gather-zxtb7/must-gather-424th" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.281470 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zxtb7"/"openshift-service-ca.crt" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.300832 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zxtb7/must-gather-424th"] Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.306715 4889 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-zxtb7"/"kube-root-ca.crt" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.383757 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-must-gather-output\") pod \"must-gather-424th\" (UID: \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\") " 
pod="openshift-must-gather-zxtb7/must-gather-424th" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.383949 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvvt4\" (UniqueName: \"kubernetes.io/projected/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-kube-api-access-nvvt4\") pod \"must-gather-424th\" (UID: \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\") " pod="openshift-must-gather-zxtb7/must-gather-424th" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.384324 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-must-gather-output\") pod \"must-gather-424th\" (UID: \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\") " pod="openshift-must-gather-zxtb7/must-gather-424th" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.417675 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvvt4\" (UniqueName: \"kubernetes.io/projected/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-kube-api-access-nvvt4\") pod \"must-gather-424th\" (UID: \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\") " pod="openshift-must-gather-zxtb7/must-gather-424th" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.544675 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.597195 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.605203 4889 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-zxtb7/must-gather-424th" Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.790839 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4gwk"] Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.885201 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-zxtb7/must-gather-424th"] Feb 19 00:33:55 crc kubenswrapper[4889]: I0219 00:33:55.897565 4889 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:33:56 crc kubenswrapper[4889]: I0219 00:33:56.462024 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zxtb7/must-gather-424th" event={"ID":"49a4b1a6-f755-4dbe-af2c-f3c6623653b8","Type":"ContainerStarted","Data":"59bcbc173d2703a99e728d912b3a9763876210f7ec380414e0d0ee565c4af2e6"} Feb 19 00:33:57 crc kubenswrapper[4889]: I0219 00:33:57.472737 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l4gwk" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerName="registry-server" containerID="cri-o://60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae" gracePeriod=2 Feb 19 00:33:57 crc kubenswrapper[4889]: I0219 00:33:57.942722 4889 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.032587 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-catalog-content\") pod \"5aa7c452-9337-495c-88ae-7d99174c6ccd\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.032888 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dg8m\" (UniqueName: \"kubernetes.io/projected/5aa7c452-9337-495c-88ae-7d99174c6ccd-kube-api-access-4dg8m\") pod \"5aa7c452-9337-495c-88ae-7d99174c6ccd\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.032999 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-utilities\") pod \"5aa7c452-9337-495c-88ae-7d99174c6ccd\" (UID: \"5aa7c452-9337-495c-88ae-7d99174c6ccd\") " Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.034087 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-utilities" (OuterVolumeSpecName: "utilities") pod "5aa7c452-9337-495c-88ae-7d99174c6ccd" (UID: "5aa7c452-9337-495c-88ae-7d99174c6ccd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.041373 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa7c452-9337-495c-88ae-7d99174c6ccd-kube-api-access-4dg8m" (OuterVolumeSpecName: "kube-api-access-4dg8m") pod "5aa7c452-9337-495c-88ae-7d99174c6ccd" (UID: "5aa7c452-9337-495c-88ae-7d99174c6ccd"). InnerVolumeSpecName "kube-api-access-4dg8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.133956 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.134015 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dg8m\" (UniqueName: \"kubernetes.io/projected/5aa7c452-9337-495c-88ae-7d99174c6ccd-kube-api-access-4dg8m\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.193379 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5aa7c452-9337-495c-88ae-7d99174c6ccd" (UID: "5aa7c452-9337-495c-88ae-7d99174c6ccd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.235345 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5aa7c452-9337-495c-88ae-7d99174c6ccd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.485180 4889 generic.go:334] "Generic (PLEG): container finished" podID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerID="60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae" exitCode=0 Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.485252 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4gwk" event={"ID":"5aa7c452-9337-495c-88ae-7d99174c6ccd","Type":"ContainerDied","Data":"60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae"} Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.485291 4889 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-l4gwk" event={"ID":"5aa7c452-9337-495c-88ae-7d99174c6ccd","Type":"ContainerDied","Data":"dfb9e7657f7783e8de194351b249113bcd60f50055b8588a3a33d48858da6234"} Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.485300 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4gwk" Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.485317 4889 scope.go:117] "RemoveContainer" containerID="60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae" Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.529868 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4gwk"] Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.537314 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l4gwk"] Feb 19 00:33:58 crc kubenswrapper[4889]: I0219 00:33:58.736736 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" path="/var/lib/kubelet/pods/5aa7c452-9337-495c-88ae-7d99174c6ccd/volumes" Feb 19 00:34:02 crc kubenswrapper[4889]: I0219 00:34:02.548008 4889 scope.go:117] "RemoveContainer" containerID="19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6" Feb 19 00:34:02 crc kubenswrapper[4889]: I0219 00:34:02.593953 4889 scope.go:117] "RemoveContainer" containerID="fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5" Feb 19 00:34:02 crc kubenswrapper[4889]: I0219 00:34:02.612595 4889 scope.go:117] "RemoveContainer" containerID="60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae" Feb 19 00:34:02 crc kubenswrapper[4889]: E0219 00:34:02.613936 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae\": container with ID starting with 
60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae not found: ID does not exist" containerID="60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae" Feb 19 00:34:02 crc kubenswrapper[4889]: I0219 00:34:02.613973 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae"} err="failed to get container status \"60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae\": rpc error: code = NotFound desc = could not find container \"60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae\": container with ID starting with 60db4cc741c1e6dad52e98867fbecfa887104d74b0d0d92d95b3a7dff365aaae not found: ID does not exist" Feb 19 00:34:02 crc kubenswrapper[4889]: I0219 00:34:02.614000 4889 scope.go:117] "RemoveContainer" containerID="19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6" Feb 19 00:34:02 crc kubenswrapper[4889]: E0219 00:34:02.614346 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6\": container with ID starting with 19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6 not found: ID does not exist" containerID="19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6" Feb 19 00:34:02 crc kubenswrapper[4889]: I0219 00:34:02.614376 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6"} err="failed to get container status \"19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6\": rpc error: code = NotFound desc = could not find container \"19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6\": container with ID starting with 19a8886bafd0c5d2c0135e7706a2ac5ad22bd1e69c756953b4fefe5c3913aba6 not found: ID does not 
exist" Feb 19 00:34:02 crc kubenswrapper[4889]: I0219 00:34:02.614393 4889 scope.go:117] "RemoveContainer" containerID="fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5" Feb 19 00:34:02 crc kubenswrapper[4889]: E0219 00:34:02.614779 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5\": container with ID starting with fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5 not found: ID does not exist" containerID="fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5" Feb 19 00:34:02 crc kubenswrapper[4889]: I0219 00:34:02.614797 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5"} err="failed to get container status \"fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5\": rpc error: code = NotFound desc = could not find container \"fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5\": container with ID starting with fe057bd59305a3c75f1396ab5a9d88f38b75bf9af520d8b8dd588a5ae06838d5 not found: ID does not exist" Feb 19 00:34:03 crc kubenswrapper[4889]: I0219 00:34:03.542201 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zxtb7/must-gather-424th" event={"ID":"49a4b1a6-f755-4dbe-af2c-f3c6623653b8","Type":"ContainerStarted","Data":"df3437bdc56f7f70a1cfdaa40426ee4da2bf3a44e8d12271fa6b3ab822061b80"} Feb 19 00:34:03 crc kubenswrapper[4889]: I0219 00:34:03.542266 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zxtb7/must-gather-424th" event={"ID":"49a4b1a6-f755-4dbe-af2c-f3c6623653b8","Type":"ContainerStarted","Data":"c257fd8f538cb5e6e328840a88cc9be038229ef43db10b7fc3021a40cd85ac88"} Feb 19 00:34:03 crc kubenswrapper[4889]: I0219 00:34:03.561153 4889 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-must-gather-zxtb7/must-gather-424th" podStartSLOduration=1.834017231 podStartE2EDuration="8.561117817s" podCreationTimestamp="2026-02-19 00:33:55 +0000 UTC" firstStartedPulling="2026-02-19 00:33:55.897235643 +0000 UTC m=+1661.861900634" lastFinishedPulling="2026-02-19 00:34:02.624336229 +0000 UTC m=+1668.589001220" observedRunningTime="2026-02-19 00:34:03.560648034 +0000 UTC m=+1669.525313025" watchObservedRunningTime="2026-02-19 00:34:03.561117817 +0000 UTC m=+1669.525782808" Feb 19 00:34:07 crc kubenswrapper[4889]: I0219 00:34:07.781478 4889 patch_prober.go:28] interesting pod/machine-config-daemon-pcmlw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:34:07 crc kubenswrapper[4889]: I0219 00:34:07.782362 4889 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:34:07 crc kubenswrapper[4889]: I0219 00:34:07.782431 4889 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" Feb 19 00:34:07 crc kubenswrapper[4889]: I0219 00:34:07.783573 4889 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"} pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:34:07 crc kubenswrapper[4889]: I0219 00:34:07.783660 4889 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" containerName="machine-config-daemon" containerID="cri-o://7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" gracePeriod=600 Feb 19 00:34:07 crc kubenswrapper[4889]: E0219 00:34:07.940466 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:34:08 crc kubenswrapper[4889]: I0219 00:34:08.584848 4889 generic.go:334] "Generic (PLEG): container finished" podID="900d194e-937f-4a59-abba-21ed9f94f24f" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" exitCode=0 Feb 19 00:34:08 crc kubenswrapper[4889]: I0219 00:34:08.584898 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerDied","Data":"7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"} Feb 19 00:34:08 crc kubenswrapper[4889]: I0219 00:34:08.584977 4889 scope.go:117] "RemoveContainer" containerID="40c89b17b39712776e841fba6d4f65f04b05a7818e694b7b66336b8111d61491" Feb 19 00:34:08 crc kubenswrapper[4889]: I0219 00:34:08.585817 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:34:08 crc kubenswrapper[4889]: E0219 00:34:08.586160 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:34:19 crc kubenswrapper[4889]: I0219 00:34:19.725661 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:34:19 crc kubenswrapper[4889]: E0219 00:34:19.726912 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:34:31 crc kubenswrapper[4889]: I0219 00:34:31.725501 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:34:31 crc kubenswrapper[4889]: E0219 00:34:31.726410 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:34:44 crc kubenswrapper[4889]: I0219 00:34:44.730927 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:34:44 crc kubenswrapper[4889]: E0219 00:34:44.732062 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:34:45 crc kubenswrapper[4889]: I0219 00:34:45.652673 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-79hxc_d67eea0c-6059-4e76-8bdf-0fb3c25e2717/control-plane-machine-set-operator/0.log" Feb 19 00:34:45 crc kubenswrapper[4889]: I0219 00:34:45.833073 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pmgn8_953c0d0a-1dec-4045-af86-0c6547b3a336/kube-rbac-proxy/0.log" Feb 19 00:34:45 crc kubenswrapper[4889]: I0219 00:34:45.862623 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pmgn8_953c0d0a-1dec-4045-af86-0c6547b3a336/machine-api-operator/0.log" Feb 19 00:34:57 crc kubenswrapper[4889]: I0219 00:34:57.725214 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:34:57 crc kubenswrapper[4889]: E0219 00:34:57.726464 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:34:59 crc kubenswrapper[4889]: I0219 00:34:59.056490 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-t42lr_8d3484ae-9bcc-4f3a-8b10-308202ec491e/cert-manager-controller/0.log" Feb 19 00:34:59 crc 
kubenswrapper[4889]: I0219 00:34:59.239177 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-wmqht_60be1caf-5351-4401-b59b-7213eef1a9b0/cert-manager-cainjector/0.log" Feb 19 00:34:59 crc kubenswrapper[4889]: I0219 00:34:59.322201 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-jkdvp_0bb9cfc5-52ef-45c0-b3d9-0bbc4885d4af/cert-manager-webhook/0.log" Feb 19 00:35:10 crc kubenswrapper[4889]: I0219 00:35:10.725458 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:35:10 crc kubenswrapper[4889]: E0219 00:35:10.728069 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:35:13 crc kubenswrapper[4889]: I0219 00:35:13.370974 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-x78h7_211e50ac-fbca-44f5-8e00-d6462342ee96/prometheus-operator/0.log" Feb 19 00:35:13 crc kubenswrapper[4889]: I0219 00:35:13.507191 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv_e9ab512a-6196-4d62-a32d-b869b3d080bf/prometheus-operator-admission-webhook/0.log" Feb 19 00:35:13 crc kubenswrapper[4889]: I0219 00:35:13.585717 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84_d7b1bacc-63b6-4446-a2b1-a1306e34f89c/prometheus-operator-admission-webhook/0.log" Feb 19 00:35:13 crc 
kubenswrapper[4889]: I0219 00:35:13.711470 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-q88r4_e398c065-2809-4a64-9ccc-801f6eb7d8b7/operator/0.log"
Feb 19 00:35:13 crc kubenswrapper[4889]: I0219 00:35:13.766140 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tr5l7_e0184625-e425-46d0-ab8a-13da9ced8a6f/perses-operator/0.log"
Feb 19 00:35:21 crc kubenswrapper[4889]: I0219 00:35:21.725536 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:35:21 crc kubenswrapper[4889]: E0219 00:35:21.726549 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:35:27 crc kubenswrapper[4889]: I0219 00:35:27.859004 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj_ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251/util/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.031863 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj_ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251/pull/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.036150 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj_ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251/util/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.053638 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj_ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251/pull/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.247007 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj_ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251/util/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.275080 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj_ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251/pull/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.293736 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1n4ccj_ac4eb3b1-fdd6-46dc-bfb9-9a7f2edac251/extract/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.447374 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6_566b088f-3462-4687-bd78-db2a2cf860cb/util/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.663732 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6_566b088f-3462-4687-bd78-db2a2cf860cb/util/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.664794 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6_566b088f-3462-4687-bd78-db2a2cf860cb/pull/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.710416 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6_566b088f-3462-4687-bd78-db2a2cf860cb/pull/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.935001 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6_566b088f-3462-4687-bd78-db2a2cf860cb/util/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.957113 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6_566b088f-3462-4687-bd78-db2a2cf860cb/pull/0.log"
Feb 19 00:35:28 crc kubenswrapper[4889]: I0219 00:35:28.971888 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fbf2g6_566b088f-3462-4687-bd78-db2a2cf860cb/extract/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.117080 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv_2c641527-033b-418b-9aba-d399b711acaf/util/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.318884 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv_2c641527-033b-418b-9aba-d399b711acaf/util/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.374446 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv_2c641527-033b-418b-9aba-d399b711acaf/pull/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.376442 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv_2c641527-033b-418b-9aba-d399b711acaf/pull/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.497404 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv_2c641527-033b-418b-9aba-d399b711acaf/util/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.534567 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv_2c641527-033b-418b-9aba-d399b711acaf/pull/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.548801 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5lr4wv_2c641527-033b-418b-9aba-d399b711acaf/extract/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.712628 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc_cb97c079-c478-4174-833e-6c5c8422db49/util/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.866683 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc_cb97c079-c478-4174-833e-6c5c8422db49/util/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.879824 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc_cb97c079-c478-4174-833e-6c5c8422db49/pull/0.log"
Feb 19 00:35:29 crc kubenswrapper[4889]: I0219 00:35:29.890069 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc_cb97c079-c478-4174-833e-6c5c8422db49/pull/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.070168 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc_cb97c079-c478-4174-833e-6c5c8422db49/util/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.076864 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc_cb97c079-c478-4174-833e-6c5c8422db49/pull/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.109228 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082bjcc_cb97c079-c478-4174-833e-6c5c8422db49/extract/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.252191 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62kjl_4f5c2904-119c-4f2a-b0f1-1efdea92c06a/extract-utilities/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.453439 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62kjl_4f5c2904-119c-4f2a-b0f1-1efdea92c06a/extract-utilities/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.498871 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62kjl_4f5c2904-119c-4f2a-b0f1-1efdea92c06a/extract-content/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.507919 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62kjl_4f5c2904-119c-4f2a-b0f1-1efdea92c06a/extract-content/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.656038 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62kjl_4f5c2904-119c-4f2a-b0f1-1efdea92c06a/extract-utilities/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.703274 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62kjl_4f5c2904-119c-4f2a-b0f1-1efdea92c06a/extract-content/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.902175 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lqbdl_15fb63a2-33a5-49fa-a5c6-854b8d5d3151/extract-utilities/0.log"
Feb 19 00:35:30 crc kubenswrapper[4889]: I0219 00:35:30.943786 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-62kjl_4f5c2904-119c-4f2a-b0f1-1efdea92c06a/registry-server/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.117591 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lqbdl_15fb63a2-33a5-49fa-a5c6-854b8d5d3151/extract-content/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.119926 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lqbdl_15fb63a2-33a5-49fa-a5c6-854b8d5d3151/extract-utilities/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.123161 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lqbdl_15fb63a2-33a5-49fa-a5c6-854b8d5d3151/extract-content/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.281460 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lqbdl_15fb63a2-33a5-49fa-a5c6-854b8d5d3151/extract-utilities/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.302038 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lqbdl_15fb63a2-33a5-49fa-a5c6-854b8d5d3151/extract-content/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.564745 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lqbdl_15fb63a2-33a5-49fa-a5c6-854b8d5d3151/registry-server/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.567810 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-w7fwf_7be42c5f-0df1-4ab4-92d8-e47ca8047150/marketplace-operator/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.684905 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94ccv_6a80ba53-e06d-4dfc-b26c-6f251cb67d26/extract-utilities/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.860813 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94ccv_6a80ba53-e06d-4dfc-b26c-6f251cb67d26/extract-content/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.864121 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94ccv_6a80ba53-e06d-4dfc-b26c-6f251cb67d26/extract-content/0.log"
Feb 19 00:35:31 crc kubenswrapper[4889]: I0219 00:35:31.867029 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94ccv_6a80ba53-e06d-4dfc-b26c-6f251cb67d26/extract-utilities/0.log"
Feb 19 00:35:32 crc kubenswrapper[4889]: I0219 00:35:32.195350 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94ccv_6a80ba53-e06d-4dfc-b26c-6f251cb67d26/extract-utilities/0.log"
Feb 19 00:35:32 crc kubenswrapper[4889]: I0219 00:35:32.286145 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94ccv_6a80ba53-e06d-4dfc-b26c-6f251cb67d26/extract-content/0.log"
Feb 19 00:35:32 crc kubenswrapper[4889]: I0219 00:35:32.463540 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-94ccv_6a80ba53-e06d-4dfc-b26c-6f251cb67d26/registry-server/0.log"
Feb 19 00:35:35 crc kubenswrapper[4889]: I0219 00:35:35.725550 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:35:35 crc kubenswrapper[4889]: E0219 00:35:35.725924 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:35:45 crc kubenswrapper[4889]: I0219 00:35:45.379315 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-x78h7_211e50ac-fbca-44f5-8e00-d6462342ee96/prometheus-operator/0.log"
Feb 19 00:35:45 crc kubenswrapper[4889]: I0219 00:35:45.403741 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4bcc864c-2m7fv_e9ab512a-6196-4d62-a32d-b869b3d080bf/prometheus-operator-admission-webhook/0.log"
Feb 19 00:35:45 crc kubenswrapper[4889]: I0219 00:35:45.429612 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7f4bcc864c-x5w84_d7b1bacc-63b6-4446-a2b1-a1306e34f89c/prometheus-operator-admission-webhook/0.log"
Feb 19 00:35:45 crc kubenswrapper[4889]: I0219 00:35:45.548624 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-q88r4_e398c065-2809-4a64-9ccc-801f6eb7d8b7/operator/0.log"
Feb 19 00:35:45 crc kubenswrapper[4889]: I0219 00:35:45.583097 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-tr5l7_e0184625-e425-46d0-ab8a-13da9ced8a6f/perses-operator/0.log"
Feb 19 00:35:46 crc kubenswrapper[4889]: I0219 00:35:46.725433 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:35:46 crc kubenswrapper[4889]: E0219 00:35:46.725705 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:36:01 crc kubenswrapper[4889]: I0219 00:36:01.725330 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:36:01 crc kubenswrapper[4889]: E0219 00:36:01.726257 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:36:13 crc kubenswrapper[4889]: I0219 00:36:13.725823 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:36:13 crc kubenswrapper[4889]: E0219 00:36:13.727052 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:36:28 crc kubenswrapper[4889]: I0219 00:36:28.726618 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:36:28 crc kubenswrapper[4889]: E0219 00:36:28.727796 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:36:40 crc kubenswrapper[4889]: I0219 00:36:40.727151 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:36:40 crc kubenswrapper[4889]: E0219 00:36:40.728370 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:36:50 crc kubenswrapper[4889]: I0219 00:36:50.890673 4889 generic.go:334] "Generic (PLEG): container finished" podID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" containerID="c257fd8f538cb5e6e328840a88cc9be038229ef43db10b7fc3021a40cd85ac88" exitCode=0
Feb 19 00:36:50 crc kubenswrapper[4889]: I0219 00:36:50.890789 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-zxtb7/must-gather-424th" event={"ID":"49a4b1a6-f755-4dbe-af2c-f3c6623653b8","Type":"ContainerDied","Data":"c257fd8f538cb5e6e328840a88cc9be038229ef43db10b7fc3021a40cd85ac88"}
Feb 19 00:36:50 crc kubenswrapper[4889]: I0219 00:36:50.892179 4889 scope.go:117] "RemoveContainer" containerID="c257fd8f538cb5e6e328840a88cc9be038229ef43db10b7fc3021a40cd85ac88"
Feb 19 00:36:51 crc kubenswrapper[4889]: I0219 00:36:51.724836 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:36:51 crc kubenswrapper[4889]: E0219 00:36:51.725094 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:36:51 crc kubenswrapper[4889]: I0219 00:36:51.748720 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zxtb7_must-gather-424th_49a4b1a6-f755-4dbe-af2c-f3c6623653b8/gather/0.log"
Feb 19 00:36:58 crc kubenswrapper[4889]: I0219 00:36:58.822395 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-zxtb7/must-gather-424th"]
Feb 19 00:36:58 crc kubenswrapper[4889]: I0219 00:36:58.823598 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-zxtb7/must-gather-424th" podUID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" containerName="copy" containerID="cri-o://df3437bdc56f7f70a1cfdaa40426ee4da2bf3a44e8d12271fa6b3ab822061b80" gracePeriod=2
Feb 19 00:36:58 crc kubenswrapper[4889]: I0219 00:36:58.829774 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-zxtb7/must-gather-424th"]
Feb 19 00:36:58 crc kubenswrapper[4889]: I0219 00:36:58.969928 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zxtb7_must-gather-424th_49a4b1a6-f755-4dbe-af2c-f3c6623653b8/copy/0.log"
Feb 19 00:36:58 crc kubenswrapper[4889]: I0219 00:36:58.971012 4889 generic.go:334] "Generic (PLEG): container finished" podID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" containerID="df3437bdc56f7f70a1cfdaa40426ee4da2bf3a44e8d12271fa6b3ab822061b80" exitCode=143
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.180196 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zxtb7_must-gather-424th_49a4b1a6-f755-4dbe-af2c-f3c6623653b8/copy/0.log"
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.180621 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zxtb7/must-gather-424th"
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.339719 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-must-gather-output\") pod \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\" (UID: \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\") "
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.339810 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvvt4\" (UniqueName: \"kubernetes.io/projected/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-kube-api-access-nvvt4\") pod \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\" (UID: \"49a4b1a6-f755-4dbe-af2c-f3c6623653b8\") "
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.347169 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-kube-api-access-nvvt4" (OuterVolumeSpecName: "kube-api-access-nvvt4") pod "49a4b1a6-f755-4dbe-af2c-f3c6623653b8" (UID: "49a4b1a6-f755-4dbe-af2c-f3c6623653b8"). InnerVolumeSpecName "kube-api-access-nvvt4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.404397 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "49a4b1a6-f755-4dbe-af2c-f3c6623653b8" (UID: "49a4b1a6-f755-4dbe-af2c-f3c6623653b8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.442513 4889 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.442556 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvvt4\" (UniqueName: \"kubernetes.io/projected/49a4b1a6-f755-4dbe-af2c-f3c6623653b8-kube-api-access-nvvt4\") on node \"crc\" DevicePath \"\""
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.995898 4889 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-zxtb7_must-gather-424th_49a4b1a6-f755-4dbe-af2c-f3c6623653b8/copy/0.log"
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.998140 4889 scope.go:117] "RemoveContainer" containerID="df3437bdc56f7f70a1cfdaa40426ee4da2bf3a44e8d12271fa6b3ab822061b80"
Feb 19 00:36:59 crc kubenswrapper[4889]: I0219 00:36:59.998550 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-zxtb7/must-gather-424th"
Feb 19 00:37:00 crc kubenswrapper[4889]: I0219 00:37:00.024149 4889 scope.go:117] "RemoveContainer" containerID="c257fd8f538cb5e6e328840a88cc9be038229ef43db10b7fc3021a40cd85ac88"
Feb 19 00:37:00 crc kubenswrapper[4889]: I0219 00:37:00.734929 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" path="/var/lib/kubelet/pods/49a4b1a6-f755-4dbe-af2c-f3c6623653b8/volumes"
Feb 19 00:37:06 crc kubenswrapper[4889]: I0219 00:37:06.740379 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:37:06 crc kubenswrapper[4889]: E0219 00:37:06.741405 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:37:21 crc kubenswrapper[4889]: I0219 00:37:21.725449 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:37:21 crc kubenswrapper[4889]: E0219 00:37:21.726632 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:37:32 crc kubenswrapper[4889]: I0219 00:37:32.725456 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:37:32 crc kubenswrapper[4889]: E0219 00:37:32.726383 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:37:43 crc kubenswrapper[4889]: I0219 00:37:43.725240 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:37:43 crc kubenswrapper[4889]: E0219 00:37:43.726434 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.808112 4889 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mwdck"]
Feb 19 00:37:55 crc kubenswrapper[4889]: E0219 00:37:55.810266 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerName="extract-content"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.810295 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerName="extract-content"
Feb 19 00:37:55 crc kubenswrapper[4889]: E0219 00:37:55.810331 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" containerName="copy"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.810340 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" containerName="copy"
Feb 19 00:37:55 crc kubenswrapper[4889]: E0219 00:37:55.810353 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" containerName="gather"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.810361 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" containerName="gather"
Feb 19 00:37:55 crc kubenswrapper[4889]: E0219 00:37:55.810376 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerName="extract-utilities"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.810385 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerName="extract-utilities"
Feb 19 00:37:55 crc kubenswrapper[4889]: E0219 00:37:55.810397 4889 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerName="registry-server"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.810404 4889 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerName="registry-server"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.810564 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" containerName="gather"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.810599 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa7c452-9337-495c-88ae-7d99174c6ccd" containerName="registry-server"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.810610 4889 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a4b1a6-f755-4dbe-af2c-f3c6623653b8" containerName="copy"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.813417 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.826076 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwdck"]
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.911047 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-utilities\") pod \"certified-operators-mwdck\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.911768 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkzbp\" (UniqueName: \"kubernetes.io/projected/df570973-72a1-4d7a-9095-52ca3247971a-kube-api-access-rkzbp\") pod \"certified-operators-mwdck\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:55 crc kubenswrapper[4889]: I0219 00:37:55.911993 4889 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-catalog-content\") pod \"certified-operators-mwdck\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:56 crc kubenswrapper[4889]: I0219 00:37:56.014123 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-utilities\") pod \"certified-operators-mwdck\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:56 crc kubenswrapper[4889]: I0219 00:37:56.014246 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkzbp\" (UniqueName: \"kubernetes.io/projected/df570973-72a1-4d7a-9095-52ca3247971a-kube-api-access-rkzbp\") pod \"certified-operators-mwdck\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:56 crc kubenswrapper[4889]: I0219 00:37:56.014281 4889 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-catalog-content\") pod \"certified-operators-mwdck\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:56 crc kubenswrapper[4889]: I0219 00:37:56.015008 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-utilities\") pod \"certified-operators-mwdck\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:56 crc kubenswrapper[4889]: I0219 00:37:56.015113 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-catalog-content\") pod \"certified-operators-mwdck\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:56 crc kubenswrapper[4889]: I0219 00:37:56.044418 4889 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkzbp\" (UniqueName: \"kubernetes.io/projected/df570973-72a1-4d7a-9095-52ca3247971a-kube-api-access-rkzbp\") pod \"certified-operators-mwdck\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:56 crc kubenswrapper[4889]: I0219 00:37:56.137531 4889 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:37:56 crc kubenswrapper[4889]: I0219 00:37:56.467035 4889 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwdck"]
Feb 19 00:37:56 crc kubenswrapper[4889]: I0219 00:37:56.724980 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587"
Feb 19 00:37:56 crc kubenswrapper[4889]: E0219 00:37:56.725464 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f"
Feb 19 00:37:57 crc kubenswrapper[4889]: I0219 00:37:57.287497 4889 generic.go:334] "Generic (PLEG): container finished" podID="df570973-72a1-4d7a-9095-52ca3247971a" containerID="e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b" exitCode=0
Feb 19 00:37:57 crc kubenswrapper[4889]: I0219 00:37:57.287598 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwdck" event={"ID":"df570973-72a1-4d7a-9095-52ca3247971a","Type":"ContainerDied","Data":"e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b"}
Feb 19 00:37:57 crc kubenswrapper[4889]: I0219 00:37:57.287944 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwdck" event={"ID":"df570973-72a1-4d7a-9095-52ca3247971a","Type":"ContainerStarted","Data":"77a8430b47969f344ef9f2683ffec904f32a430aec2b0e70c1392b15dd5a11e4"}
Feb 19 00:37:58 crc kubenswrapper[4889]: I0219 00:37:58.297491 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwdck" event={"ID":"df570973-72a1-4d7a-9095-52ca3247971a","Type":"ContainerStarted","Data":"147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61"}
Feb 19 00:37:59 crc kubenswrapper[4889]: I0219 00:37:59.312203 4889 generic.go:334] "Generic (PLEG): container finished" podID="df570973-72a1-4d7a-9095-52ca3247971a" containerID="147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61" exitCode=0
Feb 19 00:37:59 crc kubenswrapper[4889]: I0219 00:37:59.312291 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwdck" event={"ID":"df570973-72a1-4d7a-9095-52ca3247971a","Type":"ContainerDied","Data":"147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61"}
Feb 19 00:38:00 crc kubenswrapper[4889]: I0219 00:38:00.323171 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwdck" event={"ID":"df570973-72a1-4d7a-9095-52ca3247971a","Type":"ContainerStarted","Data":"cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf"}
Feb 19 00:38:00 crc kubenswrapper[4889]: I0219 00:38:00.347046 4889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mwdck" podStartSLOduration=2.896985334 podStartE2EDuration="5.347019982s" podCreationTimestamp="2026-02-19 00:37:55 +0000 UTC" firstStartedPulling="2026-02-19 00:37:57.289296135 +0000 UTC m=+1903.253961126" lastFinishedPulling="2026-02-19 00:37:59.739330773 +0000 UTC m=+1905.703995774" observedRunningTime="2026-02-19 00:38:00.344857665 +0000 UTC m=+1906.309522656" watchObservedRunningTime="2026-02-19 00:38:00.347019982 +0000 UTC m=+1906.311684973"
Feb 19 00:38:06 crc kubenswrapper[4889]: I0219 00:38:06.137921 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:38:06 crc kubenswrapper[4889]: I0219 00:38:06.138452 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:38:06 crc kubenswrapper[4889]: I0219 00:38:06.214835 4889 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:38:06 crc kubenswrapper[4889]: I0219 00:38:06.423324 4889 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mwdck"
Feb 19 00:38:06 crc kubenswrapper[4889]: I0219 00:38:06.477311 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwdck"]
Feb 19 00:38:08 crc kubenswrapper[4889]: I0219 00:38:08.390150 4889 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mwdck" podUID="df570973-72a1-4d7a-9095-52ca3247971a" containerName="registry-server" containerID="cri-o://cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf" gracePeriod=2
Feb 19 00:38:08 crc kubenswrapper[4889]: I0219 00:38:08.806834 4889 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-mwdck" Feb 19 00:38:08 crc kubenswrapper[4889]: I0219 00:38:08.922489 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-catalog-content\") pod \"df570973-72a1-4d7a-9095-52ca3247971a\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " Feb 19 00:38:08 crc kubenswrapper[4889]: I0219 00:38:08.922595 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-utilities\") pod \"df570973-72a1-4d7a-9095-52ca3247971a\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " Feb 19 00:38:08 crc kubenswrapper[4889]: I0219 00:38:08.922672 4889 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkzbp\" (UniqueName: \"kubernetes.io/projected/df570973-72a1-4d7a-9095-52ca3247971a-kube-api-access-rkzbp\") pod \"df570973-72a1-4d7a-9095-52ca3247971a\" (UID: \"df570973-72a1-4d7a-9095-52ca3247971a\") " Feb 19 00:38:08 crc kubenswrapper[4889]: I0219 00:38:08.924003 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-utilities" (OuterVolumeSpecName: "utilities") pod "df570973-72a1-4d7a-9095-52ca3247971a" (UID: "df570973-72a1-4d7a-9095-52ca3247971a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:38:08 crc kubenswrapper[4889]: I0219 00:38:08.929493 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df570973-72a1-4d7a-9095-52ca3247971a-kube-api-access-rkzbp" (OuterVolumeSpecName: "kube-api-access-rkzbp") pod "df570973-72a1-4d7a-9095-52ca3247971a" (UID: "df570973-72a1-4d7a-9095-52ca3247971a"). InnerVolumeSpecName "kube-api-access-rkzbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 00:38:08 crc kubenswrapper[4889]: I0219 00:38:08.995290 4889 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df570973-72a1-4d7a-9095-52ca3247971a" (UID: "df570973-72a1-4d7a-9095-52ca3247971a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.024434 4889 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.024480 4889 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df570973-72a1-4d7a-9095-52ca3247971a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.024498 4889 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkzbp\" (UniqueName: \"kubernetes.io/projected/df570973-72a1-4d7a-9095-52ca3247971a-kube-api-access-rkzbp\") on node \"crc\" DevicePath \"\"" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.401669 4889 generic.go:334] "Generic (PLEG): container finished" podID="df570973-72a1-4d7a-9095-52ca3247971a" containerID="cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf" exitCode=0 Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.401726 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwdck" event={"ID":"df570973-72a1-4d7a-9095-52ca3247971a","Type":"ContainerDied","Data":"cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf"} Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.401757 4889 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-mwdck" event={"ID":"df570973-72a1-4d7a-9095-52ca3247971a","Type":"ContainerDied","Data":"77a8430b47969f344ef9f2683ffec904f32a430aec2b0e70c1392b15dd5a11e4"} Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.401778 4889 scope.go:117] "RemoveContainer" containerID="cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.401777 4889 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwdck" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.421271 4889 scope.go:117] "RemoveContainer" containerID="147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.446644 4889 scope.go:117] "RemoveContainer" containerID="e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.453795 4889 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwdck"] Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.459184 4889 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mwdck"] Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.480989 4889 scope.go:117] "RemoveContainer" containerID="cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf" Feb 19 00:38:09 crc kubenswrapper[4889]: E0219 00:38:09.482177 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf\": container with ID starting with cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf not found: ID does not exist" containerID="cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 
00:38:09.482251 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf"} err="failed to get container status \"cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf\": rpc error: code = NotFound desc = could not find container \"cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf\": container with ID starting with cfe4c9fc2861c76682f17a32712e976281d4f1cbf647d54e66c679c1fd2624cf not found: ID does not exist" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.482314 4889 scope.go:117] "RemoveContainer" containerID="147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61" Feb 19 00:38:09 crc kubenswrapper[4889]: E0219 00:38:09.482900 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61\": container with ID starting with 147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61 not found: ID does not exist" containerID="147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.482941 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61"} err="failed to get container status \"147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61\": rpc error: code = NotFound desc = could not find container \"147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61\": container with ID starting with 147018be8256648cfe50b488b838f6635182f202f58a4633ea29e722cacb0f61 not found: ID does not exist" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.482965 4889 scope.go:117] "RemoveContainer" containerID="e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b" Feb 19 00:38:09 crc 
kubenswrapper[4889]: E0219 00:38:09.483436 4889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b\": container with ID starting with e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b not found: ID does not exist" containerID="e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.483505 4889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b"} err="failed to get container status \"e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b\": rpc error: code = NotFound desc = could not find container \"e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b\": container with ID starting with e4188b0ff4f1f5cad65f79068c72e293162b56a1e8f9c7d27c6072001d8e553b not found: ID does not exist" Feb 19 00:38:09 crc kubenswrapper[4889]: I0219 00:38:09.726155 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:38:09 crc kubenswrapper[4889]: E0219 00:38:09.727463 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:38:10 crc kubenswrapper[4889]: I0219 00:38:10.735705 4889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df570973-72a1-4d7a-9095-52ca3247971a" path="/var/lib/kubelet/pods/df570973-72a1-4d7a-9095-52ca3247971a/volumes" Feb 19 00:38:21 crc 
kubenswrapper[4889]: I0219 00:38:21.726411 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:38:21 crc kubenswrapper[4889]: E0219 00:38:21.727545 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:38:35 crc kubenswrapper[4889]: I0219 00:38:35.725203 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:38:35 crc kubenswrapper[4889]: E0219 00:38:35.726266 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:38:48 crc kubenswrapper[4889]: I0219 00:38:48.725535 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:38:48 crc kubenswrapper[4889]: E0219 00:38:48.726930 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 
19 00:39:02 crc kubenswrapper[4889]: I0219 00:39:02.725065 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:39:02 crc kubenswrapper[4889]: E0219 00:39:02.726280 4889 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-pcmlw_openshift-machine-config-operator(900d194e-937f-4a59-abba-21ed9f94f24f)\"" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" podUID="900d194e-937f-4a59-abba-21ed9f94f24f" Feb 19 00:39:14 crc kubenswrapper[4889]: I0219 00:39:14.729057 4889 scope.go:117] "RemoveContainer" containerID="7659b1860bcd3046a583c18c921061ca9112500fc523eb7720a7629b2316a587" Feb 19 00:39:15 crc kubenswrapper[4889]: I0219 00:39:15.402444 4889 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-pcmlw" event={"ID":"900d194e-937f-4a59-abba-21ed9f94f24f","Type":"ContainerStarted","Data":"892c4e9044c51a046fed97ce7d6072c570d3ac12ca3fdad7cd81839afb3a8354"}